Gameplay over graphics!
This is what you’re going to see from a lot of gamers these days, especially those on “my side” of things. They aren’t wrong.
The problem is that these things are usually presented as a dichotomy, or at least discussed as one: you are either choosing good gameplay or good graphics. The reality is that there is room for both. It is a matter of emphasis, often a matter of art, and graphics actually do matter.
In fact, they matter a lot. Visuals sell games. Since you cannot really “show” what gameplay is like as an experience, you instead have to show what the game looks like on screen. Visuals sell consoles and PC hardware. They influence manufacturers to create new, better hardware, which can have alternative “non-game” uses.
And that progress in hardware power can ultimately make games more accessible and more affordable over time by allowing things like emulation on very cheap hardware or the creation of consoles like the Nintendo Switch, which uses “off-the-shelf” parts to deliver good experiences at a reasonable price point. Those cheap parts exist because, at one time, there was an effort to improve visuals that encouraged the creation of new hardware, and then the industry pushed past that hardware to other, newer technology, thus making the old technology cheaper.
Most importantly, graphics allow gameplay, or perhaps more broadly, technology allows gameplay, and most gaming technology happens to be geared toward increasing the graphical fidelity in games. Graphics are, of course, not the only axis for innovation (as Nintendo themselves have proven for two decades), but they are a very important one.
Games are more than gameplay
The art of the game is its gameplay, as I have said before, but games are not exclusively play. Even chess is not exclusively play, as you have carved pieces and thus an aesthetic dimension. Video games are visual; therefore visuals are important.
Consider games like the Elder Scrolls series. The allure of these games is not the gameplay alone. In fact, many people who like the games are not enthusiastic about the core gameplay. The draw of Elder Scrolls is exploring and existing in another world – a world that is convincing. That “convincing” world includes the visual space, and the visuals for most of the games in the series were very good when released and have certainly gotten better for the last three mainline games thanks to modding. Community modders spend time overhauling Morrowind not because of the gameplay but because they love the world itself and want themselves and others to experience it in a more convincing way.
I took some time after a family death this year to relax and play games, and one of the games I returned to was The Elder Scrolls V: Skyrim. I spent the better part of two nights setting up mods to play the game again, and most of these mods were for improved visuals – things like better lighting, higher-res textures, better models, and so on. Playing the game again, this time on a 6400×1080 monitor array that surrounded my entire visual space, was great and almost as good as when I first played it at launch. It was good not because of the gameplay (which is actually my least favorite of the series) but because of its atmosphere.
It looked good. It felt good. It was convincing.
And that, ultimately, is the draw of games of this sort, whether an open-world game like Skyrim (or its “mudgenre” derivatives, which despite my criticisms can be fun) or an MMORPG: they give life to an interesting world you want to spend time in. Part of that life is the visuals, the graphics. We can’t pretend they don’t matter, or that some of our favorite games would exist without an emphasis on the visuals at some point in the development of the industry or the game itself.
Horror games are what they are because the industry was able to move past 2D sprites.
Bethesda keeps selling Skyrim because people keep buying it; they keep buying it because it’s a world they still want to play in.
The Heroes known as “Enthusiasts”
Even if you aren’t interested in “graphically intensive” games, keep in mind that at one point, Skyrim was indeed very graphically intensive and barely ran on the console hardware of 2011… and now it runs on the Switch.
Moreover, Super Mario Odyssey is only possible because that desire for great visuals pushed the market forward and has been pushing the industry for decades. If you’re a casual gamer content with simple graphics on Nintendo hardware, that’s great, but those visuals are good because nerds cared enough to drop hundreds of dollars on swiftly outdated 3dfx cards in the 1990s or on a Neo Geo.
Enthusiasts really should be lauded for driving the whole industry toward a place where we can play all the games we play. Early adopters tend to pay for the privilege, and not every trend ends up panning out. VR was big two years ago, but it was also big in the 1990s. Enthusiasts just didn’t end up getting much use out of their 800-dollar 120p headsets, but even so, the fact that people bought them kept the dream alive long enough to see the much more convincing VR of today be born.
Likewise, stereoscopic 3D is nothing new. Before the 3DS and Nvidia’s foray into stereoscopy on the PC, there was the Virtual Boy (Nintendo’s biggest failure, by most accounts), and both the Famicom (NES) and the Sega Master System supported 3D glasses back in the 1980s!
Most console gamers tend to remember the PlayStation/Saturn/N64 generation as the one that brought 3D polygon gaming to the fore, but again, things tend to start earlier and often for more money. The Super Nintendo could produce full polygon games, such as Star Fox, using a co-processor (the Super FX chip) onboard the cart, and its gameplay was based on earlier games (some in stereoscopic 3D) from the 8-bit era, such as Space Harrier. Racing games like Mario Kart or F-Zero, which used Mode 7 scaling and rotation to simulate dimensionality, took their cues from earlier sprite-scaling racers, too. If you go far enough back, you had 3D graphics in Battlezone and the Star Wars arcade games.
At the same time, the 1990s had a big 3D PC scene. 3dfx and Nvidia were fighting for supremacy in the 3D accelerator market, and some of the graphics on PC during the era would put the PS1 to shame. But again, these expensive cards drove technology forward, so when casual gamers bought a GameCube, powered by PC-derived hardware in the form of a PowerPC 750 processor (aka the G3) and an ATI GPU, they got the full benefit of the previous five years of enthusiast investment in the more expensive PC market.
Things tend to build on each other, thanks to engineers wanting to push the envelope, but also thanks to enthusiasts willing to buy in – often at a loss. There used to be a rumor that Steve Wozniak included sound on the Apple II specifically for a Breakout game he had created for the computer.
This continues today. The jokes about the PC master race point to the truth: PC gamers enjoy (and are smug about) superior graphics… but they pay more (a lot more), too. Last I checked, an RTX 3090 card was running for almost $3,000. You could buy six next-gen consoles for the price of just ONE part of a top-of-the-line gaming PC.
So yes, the PC gamer could be enjoying an incredible graphical experience (if not bitcoin mining with his GPU) compared to the console user, but he’s also paying literally ten times the price in 2021.
I’ll admit, I’m a smug PC graphics whore. I dropped some dough on my rig about six years back, getting a GTX 970 card (I just couldn’t in good conscience shell out the extra 200 dollars for the 980), and it’s been a beast, especially for games of the time. I have the defense of using my computer for other things (making and editing videos, music production, etc., etc.), but I will admit, I wanted to play some sweet-looking games.
That one time graphics got worse.
It was with the N64/PS1/Saturn. The reason? The switch from 2D sprites to full 3D games. That generation (the 5th generation of consoles) has probably aged the worst of anything after the Atari 2600. We went from bright, crisp, colorful 2D graphics that were buttery smooth to 15fps fog games with blurry textures. The Saturn, as it turns out, wasn’t even designed with 3D in mind – but that’s where the industry was going, so Sega had to follow.
It was in that era that I really noticed the massive difference between the PC and consoles: I was playing MechWarrior 2 at 1280×1024 while my friends squinted their way through GoldenEye.
It’s easy, in 2021, to call the games of that era eyesores, but at the time, the hype for 3D spaces was real. Gamers really, really wanted 3D. Magazines pushed it relentlessly, calling any 2D game “old-fashioned,” and most readers went along with it. We wanted virtual spaces, and we wanted them now!
The thing was, the games were also worse, and not just because of the poor visuals. Gameplay got worse, mostly because developers didn’t yet know how to make 3D games. Control schemes from that era vary wildly between serviceable and so awkward the game is unplayable. You couldn’t program 3D enemies the way you did sprites, so they often acted ridiculously. There were also lots and lots of glitches. But we played the games and had fun with plenty of them.
That’s another important thing to remember about not just those days, but every stage of the art’s development: we were impressed at the time. In 1996, 3D graphics were really cool, mind-blowing, even, because we weren’t used to them. 3D itself was FUN. Perhaps you remember the first time Super Mario 64 booted up, and you were able to squish and manipulate a 3D Mario, probably wasting a few minutes just enjoying that rather than playing the actual game.
Now that visuals (and the gameplay that matches them) have moved forward so much, they seem weak. Only once you take a step back and wipe the nostalgia from your glasses can you see things more objectively – and that’s true whether it is the N64 or the Atari 2600.
But the industry had to go through those growing pains to get to the next stage, the 6th generation, where the graphics in 3D spaces were finally good enough to allow for great gameplay to match them. There were still great games during the PS1 period, just like every period, but some of them aren’t as great today because those “impressive” graphics are impressive no longer. People will probably feel the same about the PS5. Being impressed is easiest on first exposure.
The best metaphor I can give for the process and the arrival point is an artistic one. You can create beautiful pictures with just a pencil if you are a skilled artist. However, if you have access to a full set of paints along with the brushes and knowledge to use them, you can create something totally transcending a sketch.
Aesthetics vs. Fidelity
Here we get to a major mistake modern developers make: they think fidelity is the same thing as aesthetics.
That is to say, they believe photo-realistic details will make a game pleasing. They do not. A perfect rendering of a dog turd is still a dog turd, and while we might be impressed by the ray tracing on the turd’s moisture, that doesn’t mean we enjoy the sight. While highly realistic visuals can make a good first impression, they are rarely moving or pleasing the way that visuals created with an actual artistic approach are. At the same time, going back to the painting analogy, high fidelity can create more options for high-quality aesthetic design, just like getting more paints and brushes can give the painter more tools to work with, but the tools do not make the art—the artist does.
A great comparison that comes to mind is in the Final Fantasy series, specifically XII and XV. Final Fantasy XII was released late in the life cycle of the PS2 and saw a re-release on the PS4. When I compare it to Final Fantasy XV, made for the PS4 with much, much higher fidelity, XII ends up being a much more visually appealing game.
It has lower polygon counts, terrible lighting (by comparison), simpler character models, and sparser environments. Yet Final Fantasy XII has an artistic approach to its designs, which are at once highly fantastical and baroque, while Final Fantasy XV feels like a mash-up of “real world” influenced designs with low fantasy. You spend the first hours in the game essentially wandering around the desert surrounding Baker, California, and having been to and through Baker many times, it’s not an appealing or interesting place to be, even with monsters instead of coyotes. There are better environments in XV to be sure, but they all seem to have this sense of grounding that isn’t as appealing as the imaginative world of Final Fantasy XII, and that game, oddly, starts you off near a desert as well.
Within the same series, you can compare Final Fantasy VII and its PS4-era remake. Essentially, the remake renders all the dieselpunk scenery and characters, including color palettes, at much higher fidelity. The two games, with radically different gameplay and visual capacities, nonetheless share the same aesthetic sense. There was a reason the game was hyped – it looked like the VR version of what people imagined Midgar to be, based on the isometric pre-rendered backgrounds of the original. It’s a shift from a representation to the thing itself.
You need not stick to that series, either. There are many high-fidelity games from the last generation that end up just looking boring. Last of Us II comes to mind, with its ultra-gritty aesthetic and ultimately bland washing of color (to say nothing of the story), or most of the so-called “walking sims” of last gen. People want to escape to the fantastical, not look at something they see every day in their real life. Revisiting Heavy Rain, a PS3 original, makes this contrast between the artistic and the imitative all the more obvious, since what is considered “high fidelity” has moved on – the visuals are boring, and there isn’t much gameplay to carry things.
And as was pointed out to me on stream, it takes an artist to make art assets; it is comparatively cheap and easy to produce models of that which exists in “real life” or to simply buy the assets. Expecting a complex graphics engine to make images appealing is like expecting the brush to paint the picture for you.
Now we come to the current generation of game consoles, one of which (the PlayStation 5) I recently acquired. Much has been made of these machines, which, along with the Xbox Series X, have been very hard to find at retail price and, to some extent, haven’t offered a great deal of new, exclusive software, instead offering a slew of “cross-generation” games for both new and old hardware.
Having now acquired one of these consoles, I can say that the generational leap between the 8th and 9th generations of game consoles is actually quite significant, a much bigger jump than between the previous two generations, perhaps on par in some important ways with the leap between the PS2 and PS3 (6th and 7th generations). This includes progress in terms of gameplay. Unfortunately, the jump is not obvious when watching game footage in YouTube videos, and I’ll explain why.
I personally remember the leap to the 8th generation (PS4 and Xbox One – Nintendo was and is on its own trajectory) being very underwhelming. Like this year, most games received a cross-gen release, and those cross-gen games didn’t look substantially better on the new consoles. There were reasons for this – the hardware was new, developers hadn’t optimized for it, and the games were often developed with the previous generation in mind – but in any event, the switchover was more subtle. Games went from mostly running at 720p (perhaps upscaled) in the 7th gen to 1080p in the 8th gen, while still often dropping to lower resolutions or suffering framerate dips. Textures were better, to be sure, and lighting too, but not in so drastic a way that you felt like you had bought a brand-new, super-powered console. It certainly wasn’t like plugging in your SNES on Christmas morning, 1991.
The PS5 is a whole different ballgame. While much has been made of the rendering technology in the PS5, things like ray tracing and the high-speed SSD, the real jump is in framerates. Running games at 60fps or, in some cases, 120fps is the real game-changer, the real leap forward in fidelity. Most games give you an option: run in “graphics mode” (emphasizing 4K-resolution graphics) or in “performance mode” (maintaining a high framerate). The thing is, these labels are slightly misleading, as I can’t think of a bigger jump in graphical fidelity than increasing framerate, especially now that I can essentially see the two options side-by-side. You see more of the game world, and you see it with better clarity than at the 30fps that was standard on the 7th and 8th generations of consoles. In fact, 30fps (and below) had been standard in the console space for so long that I had lost hope the industry would ever ship another 60fps 3D game.
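Part of the framerate argument is simple arithmetic: doubling the framerate halves the frame time, the gap between the images your eyes (and reflexes) get to work with. A minimal sketch of the numbers:

```python
# Frame time (milliseconds between displayed images) for common console framerates.
# Halving the frame time is what makes 60fps and 120fps feel so much more responsive.
for fps in (30, 60, 120):
    frame_time_ms = 1000 / fps
    print(f"{fps:>3} fps -> {frame_time_ms:.1f} ms per frame")
```

At 30fps, every frame of delay costs about 33ms; at 120fps, the same delay costs only about 8ms, which is one reason “performance mode” so often wins in practice.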
And this graphical improvement? It improves gameplay, big time. Seeing the environment more clearly means you can react better to in-game events, and the games feel much more responsive than they used to. The PS5 (and the Xbox, though I haven’t tested it myself) has significant backward compatibility, which means that in many cases you get that 60fps bump on games from the previous generation – the graphics and gameplay are improved in games you already own. And since I store my PS4 games on another SSD, I get the disappearing load times as well. So “cross-gen” is more “cross-gen” than ever, but not in a bad way.
Kind of like the jump to HD with the 7th generation (the PS3), you have to have the right TV to even properly see what’s possible (the PS3 originally came with composite output; you could hook it up to your SD CRT). A 120Hz 4K TV with HDR really shows just what “high fidelity” means in 2021. Console games have never looked better than on such a display, and with the high frame rates, they’ve never felt better. However, like any push in technology, it’s enthusiasts (and I suppose I must admit at this point that I am one) who lead the way – with their wallets open.
I have a feeling a large portion of gamers would prefer to wait – after all, most of the games are still available on current hardware.
Gameplay is still king.
Let’s circle back around to the dichotomy – graphics vs. gameplay.
The truth is, you do not need graphics at all to make a compelling game. There are plenty of text-based games for the Apple II and Commodore 64 that are fun and interesting (try Zork sometime). There are tabletop RPGs that forego any visuals whatsoever and yet deliver incredible and unique gameplay experiences. In 2021 you can still play MUDs (Multi-User Dungeons, a kind of text-based proto-MMO) and have a blast playing with other people.
However, these experiences are not the same as a 3D RPG like Oblivion or The Witcher, or the many MMOs out there. It’s the process of the so-called “graphics whores” (sorry, enthusiasts) driving tech forward that allows those sorts of games to exist.
Super Hexagon is an amazing game. Demon’s Souls is an amazing game. Both are challenging, but they are unlike one another, and I think the gaming world would be lesser without the Souls games, even if I am terrible at them.
And the Demon’s Souls remake? I know I’ve said over and over to avoid remakes, but the new one is really good, perhaps better than the original, and the reason is the graphics, specifically the high framerate, which makes the game more responsive and, frankly, more fun than the PS3 original. I have to eat crow over my own position on that one, because this is one of those cases where you actually get something worthwhile from a remake – a true rarity these days.
So, while I play the devil’s advocate and point out that yes, graphics do matter, don’t forget that the art of the game is its gameplay. Graphics can enable gameplay, but they are no substitute for it. Many studios would be wise to remember that they are making a game, not a movie. At times I feel this basic premise is missing from the minds of some game directors, who clearly want to make movies rather than games, but I am but a lowly author, able to make my stories exist by myself in a medium that suits them.
At the same time, we should also remember that graphics are not an end in themselves. They are a vehicle for art. If the artistry is missing, all the 4K ray tracing in the world won’t make the world interesting to look at. Ultimately, the graphics have to be good enough for all the aspects of the game to work. I think this is one of the reasons the Nintendo Switch remains a great seller, despite being anemic even compared to the PS4. The graphics are good enough for the player to enjoy the art and for the gameplay to be tight and responsive. Not only that, but the hardware is good enough that lots of games from the previous two generations can run on the Switch, including Skyrim and The Witcher 3, along with Nintendo’s own great Wii U catalog. If you want to buy access to good games, the Switch is a very reasonable way to get them.
Lastly, rendered graphics are not the only technology that opens up possibilities and creates immersive worlds in games. There are input methods, netcode, haptic feedback, other display technologies, stories and acting (yes, those can be a technology), and the oft-neglected world of music and sound design. But those are for another time. For now, just know that graphics really do matter, but they certainly aren’t the only thing that does.
I am an independent writer and musician. You can support me by checking out a book below. I also stream games every Saturday at 1 PM Pacific on my YouTube Channel.