The Tantalus of Realism

What will happen when human talent plateaus?

 

That does seem to be the worst possible way to kick things off.

Look, there’s been a bit of a proverbial e-peen fencing match of late between the XBox One and the PlayStation 4. This is all par for the course for people who want to convince themselves that the games industry is like Highlander, and “There can be only one!”. We know this is not true, of course; there’s plenty of room for all, if the 75 million-plus sales apiece of the PlayStation 3 and XBox 360 are any indication, not to mention the 100 million or so Wii sales. There’s a lot being discussed about how architecturally different the two machines are in terms of memory: the PlayStation 4 uses 8GB of GDDR5 RAM, primarily suited to graphical computations but also sharing the workload of the firmware and system settings, while the XBox One is set to throw in 8GB of DDR3, which is better suited to operational tasks such as firmware and multitasking. The discussion has been about how more can be squeezed from the XBox One – a lot more than first expected – and how it compares to the PlayStation 4.

A lot of talk has been of “theoretical limits”. Numbers and figures so dazzlingly high you’re supposed to be impressed. But… I’m not.

Arguably the first big problem Microsoft and indeed Sony have is not each other, but Nintendo and the Wii U. And a lot of this has to do with what most people are expecting from this generation – visual clarity. By that, I mean 1080p High Definition visuals running at a crisp and consistent 60 frames per second. It’s the proverbial holy grail of video gaming now, and for Nintendo – on a weaker machine – came the promise at E3 that all its own in-house games would adhere to this standard.

You’d think that was easy; that if the Wii U can do it then surely the PlayStation 4 and the XBox One would make mincemeat of such a task. But it’s not quite that simple. For Microsoft’s in-house software and for Sony’s in-house software, I do expect 1080/60 to be the default standard. I really do. But for the likes of Konami, Capcom, Activision, EA and the other various third parties, I’m not entirely sure. A lot of this comes down to the very nature of third-party support; the need to strike a compromise between two distinctly different platforms. And they ARE different. If there were any real similarity you’d see a patent fight emerging, and there’s no question that the memory differences between the two systems will play a very important part in how each runs its games. But to a third party, this is a nightmare. They have to look at the bottom line: two platforms, possibly even three, all utilising very different hardware sets. No doubt by the end of the generation people will have found a better way of doing things; but in the short term, it’s unlikely you’ll see much consistency here.


The Last Of Us. Yes, it looks THAT good.

See, the problem is that human talent has a limit. We can sit and discuss the voltage efficiency of GDDR5, or the ESRAM inside the XBox One. We could talk about flops until I flop out all over the floor. And clock cycles and hertz and lots of other things, but all of that technical jargon is pretty much missing the point and getting close to acting like a bonobo chimpanzee. I’m not that interested in the numbers – nice as they are. It sounds very lovely to hear all these developers talk about their passions, but arguably it hides the fact that this coming generation is more of a sideways step than a jump up.

We’re shifting off the old track; the nature of hardware is changing. In the past, most consoles and therefore most video games have been very CPU-driven affairs. With limited memory on board, and graphics chips that are a little low-end even when the machines launch brand-new, a lot of game development has been about getting as much from the CPU as is humanly possible. And if you take a look at the PS3, and The Last of Us in particular, you’ll see the fruits of those labours. It’s bloody impressive stuff. But now CPUs are getting a bit more costly and the GPU is becoming a cheaper and more efficient entity. Couple this with cheaper memory costs all round, and what we have is the recipe for games consoles to finally end up architecturally in the same ballpark as a PC.

(Of course, this means the real winners of the new wave of games consoles will be the PC users, who will no longer end up with games reliant on the CPU and tethered by the limitations of archaic console hardware. The similarities can allow for a higher degree of natural scaling than ever before. PC games will simply function better for the glorious PC Master Gaming Race…)

But we are also reaching the theoretical limits of what is visually possible.

Again, look at The Last of Us. It’s a phenomenal achievement, and that takes talent. A huge amount of talent, and arguably it’s hard to see how they will advance on that in the coming years with the PlayStation 4. Indeed, the argument may be that you can’t. We’re rushing for such photorealism and such visual polish that ultimately, we may end up caught between the Uncanny Valley and hyperrealism. With visuals that are either too perfect or too obviously artificial, and with AI routines and design layouts that are obviously man-made; ordered, placed, forced, unnatural to the human eye. With talk of more polygons, more points of reference and better motion-capture techniques allowing for more natural movement in a game world, the danger of the chase towards the realistic is that we miss the line, and career straight into something from which there is no escape and no return.

The best way to explain this is to think of Grand Theft Auto 5. Its artistic style is obviously that of a video game. You know it isn’t real. Your eye and your brain can tell that it isn’t real. They can tell that everything has been digitally placed, so whatever you do in the game world – there is a detachment. You are enjoying entertainment, your eye and your brain are processing the visuals, and you are having fun. Now, imagine that those visuals are not so stylised. Imagine for a moment that they are like those in The Last Of Us; very realistic and detailed. Running at 1080p. 60FPS. Everything looks natural, moves naturally and feels right. And then you run into someone in a car.

Hyperrealism would suggest that a lot of people would feel… remorse. Guilt. Anguish, even. If the eye and the brain end up finding it genuinely difficult to distinguish the fantasy from the reality, then you could end up with games like Grand Theft Auto becoming deeply unpopular. Watching one of those autopsies on CSI: Vegas is not the same as seeing a man stabbed to death on the news, for example; in CSI, it’s obviously faked. Everything is so staged and shot in a way that distances it from any notion of realism; and if in doubt, hey! Make the corpse talk! You lift the mood and distract from the revulsion that would otherwise set in. Seeing a man stabbed to death on the news – as the UK ‘enjoyed’ some weeks ago now – is very different. Seeing the reality, seeing the blood, it’s different. And even if you are not there, your brain knows inherently this is wrong. This is horrible. This is shocking. This is disturbing.


Resident Evil 6. Also looks fantastic.

The brain can make such distinctions. Hyperrealism occurs when the brain finds it hard to distinguish between the two, and so feels remorse and regret at something done in an artificial game world. Arguably, hyperrealism is the WORST place for videogames to go. When so many rely on guns, violence and death as the vehicles that carry them, chasing a level of visual fidelity that would make all those convenient vehicles for entertainment a big turn-off for the average consumer would be professional suicide.

The other bad place is Uncanny Valley.

I’ve mentioned it before, but the Uncanny Valley is when something that looks real becomes unreal, and the connection and the illusion are lost. Not in the blurring of the lines in the way hyperrealism would act upon us, but in a complete disassociation from what is on the screen. It’s the point when the realistic unreality on the screen jars with something that reminds us of its inherently unreal origins. And this is the more likely scenario when you consider that, by and large, AI routines and storage limitations have meant that most games today don’t do much that is “realistic”. If you were to hide in a box in the real world, most normal people would probably check the boxes in the corner and find you. It’s just how we think. For an AI routine, unless it sees you, or sees “movement” in its set field of vision, your box is to all intents and purposes invisible. The guard can walk right by you. Indeed, in some stealth-based games, they can walk right by you as you stand beside them and they don’t bat an eyelid. Their set method of being is artificial. How they act, operate and do things is all set by a few lines of code knocked together by someone on a late-night bender, probably. It destroys the illusion.
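To make that concrete, here’s a toy sketch – purely illustrative Python, an assumption for the sake of argument rather than anything lifted from a real game engine – of the kind of naive check described above. The guard only “notices” the player if they fall inside a fixed vision cone and aren’t flagged as hidden; a player crouched in a box at arm’s length simply doesn’t exist as far as the routine is concerned.

import math
from dataclasses import dataclass

@dataclass
class Guard:
    x: float
    y: float
    facing_deg: float     # direction the guard is looking
    fov_deg: float = 90.0   # width of the vision cone
    view_dist: float = 10.0  # how far the guard can see

@dataclass
class Player:
    x: float
    y: float
    hidden: bool = False  # e.g. crouched inside a box

def guard_sees_player(guard: Guard, player: Player) -> bool:
    """Naive stealth check: a fixed cone plus a 'hidden' flag, nothing else."""
    if player.hidden:
        # The box makes the player invisible, even at arm's length.
        return False
    dx, dy = player.x - guard.x, player.y - guard.y
    if math.hypot(dx, dy) > guard.view_dist:
        return False
    # Angle between the guard's facing direction and the player.
    angle_to_player = math.degrees(math.atan2(dy, dx))
    diff = abs((angle_to_player - guard.facing_deg + 180) % 360 - 180)
    return diff <= guard.fov_deg / 2

# A guard staring straight ahead, with the player boxed up one step away:
guard = Guard(x=0, y=0, facing_deg=0)
player = Player(x=1, y=0, hidden=True)
print(guard_sees_player(guard, player))  # False – the guard walks right by

A real person would check the suspicious box in the corner. This routine never will, because nobody wrote a line telling it to – and that’s exactly the kind of jolt that reminds you none of it is real.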

The Last of Us had this for me many times over; it did a good job of masking it, but ultimately sometimes the AI would see us before I’d even moved into a new area. Sometimes I could walk right by a human character and do a little song and dance number whilst Ellie bumped and grinded on their leg, and they wouldn’t have been any the wiser. Admittedly, this is funny; but Joel and Ellie are masterpieces of realistic design, so much so that everything else – every enemy that was a copy of another enemy, every surface that was a little too sharp and clean (like the University foundations) – distracted me from the immersion that was there. No doubt The Last Of Us is a cracking piece of design work; but sadly that’s what happens when you push things this far. It can be very easy to cross that line.

So:
Hyperrealism is when the human brain struggles to make the distinction between the fantasy and the reality.
The Uncanny Valley is when the brain does make the distinction between what is real and what is not, in terms of how we relate to it – often with revulsion.

If any of this seems a bit far-fetched; it really isn’t. For some, progress isn’t defined by critical success or even commercial success; it’s defined by pushing for the theoretical limits, by impressing their peers. It’s by trying to crank that dial up to eleven, by really trying to do something amazing and stand out from the crowd. And ultimately, I think the realisation is beginning to hit home that those theoretical limits may not be attainable by human hands. Hardware and software are only as good as the person wielding the code (in my case, for example, it’s like giving a toddler a chainsaw), and what is theoretically possible… may just be out of reach. We’ll have created something with potential beyond the scope of attainability; and the process of ever reaching for that goal, like Tantalus, will be torture for many in the industry. Ever reaching, never quite getting there. The tireless drive for realism will have driven them to the point where realism just isn’t good enough, where actually attaining it will have become almost a hollow victory.

Arguably, this may be a good thing; because if the market cannot realistically attain the nirvana of reaching those theoretical limits, then there are plenty of other challenges out there.


Rogue Legacy. Not realistic. Tons and tons of fun.

There are genres that have been underutilised for years; franchises that gathered dust in the relentless pursuit of “realism”. This is somewhat where the ‘Indie’ scene has taken off; the diversity of content in that space has become nothing short of spectacular. This week, I’ve played Rogue Legacy – a cross between Castlevania and the classic Rogue randomly-generated formula. I’ve enjoyed Gunpoint, a sort of methodical stealth puzzle game. I still love Recettear, a JRPG crossed with an RPG shop simulator. Lone Survivor was an old-school take on the Survival Horror formula in a classical Clock Tower way, Terraria indulged my creativity in ways I was not prepared for, and Cave Story+ remains one of my favourite little platform adventures (also a bit Metroidvania in style).

When power is not the intent or the goal, creativity can achieve a huge deal more than any $60 FPS. Most commercial games have chased the same goals and ended up, sadly, more or less bleeding into each other; there’s no artistic or individual merit to some of them. And that doesn’t mean games have to look bad; far from it. Trine 2 is gorgeous to behold. The Swapper is visually striking in every regard. And the forthcoming Contrast is a spectacular Noir-themed, shadow-based platform adventure. None of these games are ugly – far from it, in fact. And not only are they visually appealing; they play differently, offering new challenges and ideas, creating new and exciting horizons.

And for me, this is where I would hope the market is headed. If it cannot chase the limits of realism any longer, then excuse me for saying this but, “£$%&ing HOORAY!” 

Because I’m no longer finding myself impressed by visual spectacle. Even when it’s as huge a leap as The Last Of Us, I just find myself inevitably drawn to its failings more than what it actually manages to attain – and it attains a LOT, don’t get me wrong, it is a massive success. But I suppose this is the Uncanny Valley at its finest; the flaws and the artificial spots, like a line that is too sharp or a patch of tearing half-hidden behind a boiler, become all the more obvious and hard to ignore.

You may find that odd, but I guess it’s the fatigue of years of people chasing the same end goal; eventually these flaws and faults stand out as more startling and striking than ever before, especially if you’re trying to hide them. The more real things look, the more my eye goes looking for a reminder of their unreality. My head is always looking for the reminder that this isn’t real. It’s the failsafe. It’s a mental construct designed to stop me from becoming too emotionally invested.

When we’re not looking for that, and instead enjoying the diversity of genres and content, then I do think this will somewhat ease off in people like me. I don’t criticise Terraria for its 16-bit aesthetic. Nor do I mock Recettear for its sprite-based antics. Because they are asking me to be impressed by more than visuals; when we’re not judging based on how something looks, we can be more analytically critical of everything that lies underneath the surface. Because hey, Resident Evil 6 looked amazing. It really did.

Judging games on what they can do visually, or what they could do visually, is rapidly becoming a failing argument. The era when all a big-budget video game had to do was look amazing is kind of over, or at least coming to an abrupt end. All that many developers can do when they hit that point where they can go no further, when their talent limit has been reached, is look back. We’ve come a very long way in a very short space of time; look at a game from 2003 compared to a game from today, and we’re talking large leaps. But the leaps are getting smaller. The jumps are getting closer together. Soon there will be very little left to be impressed by in terms of graphical fidelity alone.

Hopefully, in an ideal world, hitting this point will result in diversification; games no longer pushing for the limits with massive budgets, but an industry that once again gets back to creativity, back to something more in line with what we expect. Because it’s true: for all the console sales, each individual game has a pretty small market. Perhaps this is the genius of the Wii U; 1080p and 60FPS as standard? Sorted. And it’s Nintendo. Expecting them to chase super-realism is like expecting a dead snail to win the Olympic 100m sprint. It’s a ridiculous notion.

They may yet do; there’s an odd shine to Mario Kart 8 so far that I find a little hard to swallow, but it’s too early to tell whether that’s art direction or a side effect of 1080/60. Time will tell if this shift will actually do us any good, or just further muddy already murky waters. But for now, there’s a lot of talk of numbers, limits and what people want to do. It’s easy to forget that at the end of it, we’re meant to want to buy these products. They can keep reaching for that impossible goal all they want; we’re going to notice it more and more. If the game underneath sucks, the visual disconnect is going to be all the more alarming and disjointed.

One can only hope that if this generation is meant to last ten years, and “may never fully utilise the consoles’ power”, they stop chasing those numbers and look more at the things underneath: the AI algorithms, the gameplay, the structural design, the genre, the tone and yes, even the artistic direction. If the end result is that we may never max these things out, then perhaps people shouldn’t try to? Perhaps it’s time to accept that such a destination isn’t a good thing anyway, and that there are plenty of perfectly wonderful places on the road ahead that need some attention.

We’ll all get better games. And hopefully, they’ll all end up looking very different as well.

Which would be a win-win all round, right?
