Critical Intel

Realistic Graphics Are Broken


E3 is nearly upon us. Somewhere in Los Angeles, event planners are marking maps of the expo floor. PR reps proofread PowerPoints. Developers brew coffee as they pull yet another late night ironing the wrinkles out of a demo. Microsoft and Sony sit in the wings like gladiators waiting for the gates to rise, and the weapons they will battle with… are graphics.

The fact that touting graphics is a foolish strategy doesn’t mean they won’t do it. At its PS4 reveal, Sony trotted out David Cage to talk about creating realistic human faces. The only game-based portion of the Xbox One event consisted of Infinity Ward bragging about animating arm hair and a motion-captured dog. Despite many looming questions, console makers are still selling us systems based on how good their games look.

It’s a mistake. Realistic graphics are a dead end for a whole host of reasons. The uncanny valley. Spiking production costs. Diminishing returns as we creep closer to photo-realism. But I’ve yet to see anyone pin down the greatest problem: that game mechanics haven’t caught up with graphical improvements. In other words, we can make a game look like the real world, but we don’t have the technology to let the player interact with the world realistically. This graphics-mechanics gap causes a psychological reaction that we’ve only started to explore – and that has defined many of the problems of this console generation. We’re living in the uncanny valley now.

Put simply, the more realistic games look, the more we expect them to behave like the real world. Games have always been engrossing and compelling, but it wasn’t until graphics reached a certain fidelity that we started to hear the word immersion getting tossed around. Suddenly, instead of simply amusing the player or telling a story, games became about creating a believable digital world. While gameplay continued to advance apace, graphics rapidly outstripped it. Devs found that they could make a world look more real by putting a high-pixel gloss on an already-working set of game rules, with only a few mechanical tweaks or added sections to advance interactivity. Better still, improving graphics made games easier to sell to the public – after all, it’s difficult to advertise a new combat system on TV or in a magazine, but you can show off handsome screenshots.

However, while artists added polygons by the thousands, the programmers struggled to keep up with how the world actually behaved. The results were hallways with a half-dozen doors, only one of which opened. Characters you could marry that didn’t behave in any way like a wife or husband. In a now-infamous 1994 review, Edge magazine downgraded Doom because you couldn’t talk to the demons. That’s been an internet joke for a while, but people who use the review as a punchline miss its point: that Doom leaned on its visuals and masked its shallow world with “impressive padding” like beautiful mountains you could never visit and 3D-animated enemies that were reused ad nauseam. In other words, stunning visuals reminded the writer of what he couldn’t do rather than what he could.

Nearly twenty years later, graphics have become near photo-realistic, but we’re still stuck with shoot, reload, pick up object, place object, inventory and menu screens. And the more games look like the real world, the more atavistic these mechanics seem because they’re so game-like – we’ve created a medium where we’re invited to immerse ourselves in real-looking environments, but our method of interactivity breaks that immersion.

Early in gaming history this was less of a problem, simply because games looked like games. During the era of Atari, the Super Nintendo, and the PlayStation 2, unrealistic elements weren’t questioned, since players knew that the medium had limitations. We mentally dealt with extra lives, invisible walls and non-opening doors the same way a theater audience accepts that an actor miming riding a horse onstage is, in the context of the play, riding a horse. Most artistic mediums have conventions like this, where audiences allow artists to fudge or hand-wave details in order to tell a story within technical limitations. For example, sitcom audiences accept that no matter what happens in a given episode, the ending will uphold the status quo – Homer Simpson might go to space or win the lottery, but by the end he’s still a lower-middle-class suburban schlub. Jackie Chan can beat up hundreds of guys because he’s Jackie Chan. In games, players are limited in how they can affect the game world because it’s the game world.

The closer we got to photorealism, though, the more gameplay conventions started to butt up against how we perceive games. Few people argued that Metal Slug and Contra glorified war and dehumanized foreigners, because in those games both the guns and the enemies were fairly abstract. You could tell a machine gun from a flamethrower, sure, but the guns themselves had no real-world analogue. Enemies didn’t crumple or hold their wounds when you shot them. Their dead bodies didn’t lie in the street, faces staring at the sky as you walked past.

Suddenly, killing waves of enemies – a perfectly normal thing in the abstract language of games – became uncomfortable for a growing number of people. It also became harder to explain to outsiders why violent actions were permissible in the context of a game. When you try to explain to non-gamers that it’s fun to kill your friends in Battlefield 3, they look at you like you’re deranged for two main reasons: Battlefield 3 looks fairly real, and in real life killing is not fun. To non-players, ending a life is difficult, unpleasant and emotional, while to players in the context of a team deathmatch, it’s nothing more than scoring a goal. Because we understand that death is not a big deal in the language of videogames, friends can kill friends with no ill will afterward. The fact that we wouldn’t find gunfights amusing in real life goes without saying – we don’t worry about it, the same way we never shed tears for the Nazis that Indiana Jones kills in Raiders of the Lost Ark. It’s just a convention of the medium. To us it’s digital cops and robbers, but to an outsider it looks twisted and cruel.


But recently, even longtime gamers with a background in game conventions have begun to question the gap between the narratives and the action of gameplay. If you’ve read game journalism in the past year, you’ve no doubt come across the term ludonarrative dissonance. Though the phrase sounds painfully academic – even for an occasionally pedantic soul like me – it accurately describes the feeling of game mechanics that don’t jibe with the story. Nathan Drake is a likable and ethical guy in the cutscenes, for example, but during the gameplay itself he guns down thousands of people over treasure. Over the last few years we’ve heard a lot about this ludonarrative dissonance, but there hasn’t been much exploration of what causes it. Interestingly, it’s caused by the same thing that makes actors onstage mime riding a horse – that is, technical limitations. There are so many shooting games, for example, because shooting mechanics work well with existing technology. That sounds like a cop-out, with studios doing what’s easy and safe, but it’s hard to blame them for not wanting to stake cash on new mechanics whose interactions are more complicated than straightforward combat. As Heavy Rain or L.A. Noire can attest, game mechanics that represent realistic conversations and environmental interactions are still in their infancy. Why can’t you talk to the demons? Because it would involve a clunky and immersion-breaking system, and it’s just easier to shoot them.

This gap has become such a problem that for the last couple of years designers have actually written their games around recontextualizing the dissonance. Spec Ops: The Line played with the idea by having the violence take a mental toll on Captain Walker. Far Cry 3 and Blood Dragon satirized game conventions like collecting items and leveling up. Even Uncharted 3 dealt with Drake’s compulsively self-destructive lifestyle. These games worked around the problem rather than solving it, but the general consensus seems to be that we’re approaching a point where realistic visuals are undermining the conventions and mechanics that have defined games to this point. When we see realistic worlds we can interact with, we instinctively want to force real-world logic onto them – and the structure isn’t there to support real-world logic. David Cage can show me all the soulful old men he wants, but I know that when I speak to those emotional eyes I’ll do so with an immersion-breaking conversation wheel or “Press X To” command that reminds me that yes, I’m playing a game.

The industry is so hung up on reality – and the justified fear that game-like mechanics break immersion in its facsimile worlds – that it has led an effort to harness our physical bodies into the game. From motion controls to the Oculus Rift, console and peripheral makers have made it their goal to remove “obstacles” between the player and the game. Take away the controllers and screens, they seem to believe, and immersion will be complete. While I’m all for new control schemes, I’d argue that controllers don’t break immersion – they didn’t when I played Skyrim – and I was engrossed in games well before realistic graphics became the norm.

In fact, many of the runaway successes of the last few years – from Angry Birds to Minecraft – have turned away from realism and embraced consciously game-like aesthetics. Such a route allows you to play with game mechanics rather than against them. Why do stone blocks float in air after I’ve mined everything around them? Because it’s a game, and it looks like one. My brain doesn’t expect a world made out of 1 x 1 Legos to follow the laws of physics. The solution to some of our problems with “realistic” games could come from abandoning the aesthetic, as these indie games have done out of necessity. Let games be games and don’t worry about making them look real.

Another option is to take ludonarrative dissonance head-on by putting significant development effort behind bringing mechanics more in line with the visuals. Use the processing power of gaming PCs and the next generation of consoles not to give us prettier worlds, but ones that react to our actions more realistically. Studios are currently trying to close the gap by addressing the problem as part of the narrative, as in Spec Ops and Far Cry 3: Blood Dragon, but playing off the dissonance and making fun of it is a stopgap measure. Snarky references and in-joke sarcasm get old fast. Creating game worlds that behave more realistically is going to be difficult and expensive, but someone has to do it eventually. Many experiments will fail, but the one that succeeds will create a unique game that’s sure to get attention.

Peripherals also hold out some hope. While I’m a little cynical about motion controls, I do think devices like the Oculus Rift create opportunities for advances in mechanics. Likewise, while the Kinect has never been particularly useful, if it could respond to natural-language speech the way Siri does, it’s conceivable that we could eliminate conversation wheels at some point in the future. But none of these address the main problem, which is that games look too real for the systems behind them.

And perhaps we’ll just have to live with it. While ludonarrative dissonance is an issue and our supposedly “realistic” game worlds are little more than Hollywood set facades, we may just have to acknowledge the discrepancy and go about our lives. While “yes, it’s weird, but it’s okay because it’s a convention of the medium” doesn’t cover all the issues created by the reality gap between graphics and mechanics, it’s a useful tool to rely on until the technology improves. That doesn’t mean we need to stop writing about it, or talking about it, or trying to solve it; it just means that for now we have to work with the technology at hand – and be aware that photo-realistic worlds with limited player interactivity can create a strange feeling in the player.

And maybe one day, when we’ve made sufficient advancements in hardware and AI, we might even be able to talk to the demons.

Robert Rath is a freelance writer, novelist, and researcher based in Austin, Texas. You can follow his exploits at RobWritesPulp.com or on Twitter at @RobWritesPulp.
