With the Nintendo Switch event scheduled for Thursday (with a Treehouse stream due on Friday to reveal more games), the focus has shifted back to a perennial bugbear that has haunted the industry for years – graphics. Namely, many seem convinced that the Nintendo Switch won’t have the power to run graphically demanding games, or at least that it would require some downgrade to fit them on its reportedly HD-Ready 6½-inch screen. A vocal minority – and the press – have once again pondered whether this will be ‘the death of Nintendo’.
(Of course, this ignores a handheld market where, traditionally, technologically superior devices have fallen to Nintendo’s modestly powered, cheaper-to-produce hardware. It also ignores the reality that the PlayStation 4 marks the first time in twenty years that the technologically superior device has led the market – a single instance against decades of evidence suggesting otherwise, and one that owes much to the complete failure of Nintendo and Microsoft to compete during the last three to four years – but hey, don’t let the facts get in the way of your argument, right?)
My take on the problem of graphics is a little more tangential.
Graphics have become the byword for hardware advancement largely because each generation since the early ’80s has brought with it significant leaps in graphical fidelity. We went from the huge, blocky visuals of the ZX Spectrum, Atari and Amstrad devices, to the 8-bit era, where games started to see things like parallax scrolling; then to the 16-bit era, where two-dimensional sprites became beautifully detailed and we saw the birth of early three-dimensional gaming; then to early 3D games like Tomb Raider, Super Mario 64, Ocarina of Time and Final Fantasy 7–9; onwards into Gen 6, where more visual effects became commonplace; and finally to Gen-7 and Gen-8, which leapt into slick, high-definition graphics approaching – and sometimes achieving – photo-realism. Each generational leap has seen huge jumps in visual fidelity, even on the weaker devices, and so we have been conditioned, in a sense, to see graphical prowess (and therefore hardware power) as the defining characteristic of a new generation.
What we often forget is the issue of “diminishing returns”.
The law of diminishing returns states that in all productive processes, adding more of one factor of production while holding all others constant (“ceteris paribus”, or “all other things being equal”) will at some point yield lower returns per unit of investment. A later refinement added a further stage called “negative returns”, the point beyond which additional investment actually reduces total output. Plotted, the curve rises steeply at first, flattens out, and eventually turns downward.
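The two stages can be seen in a toy production curve. This is a minimal sketch, assuming a made-up quadratic output function purely for illustration – the numbers have no real-world source:

```python
# Illustrative production curve: total output for n units of one input,
# all other factors held constant. The function 30n - n^2 is invented
# for demonstration and is not drawn from any real data.

def total_output(n: int) -> int:
    return 30 * n - n ** 2

# Marginal return of each successive unit of input.
marginals = [total_output(n) - total_output(n - 1) for n in range(1, 21)]

# Each extra unit yields less than the one before (diminishing returns)...
print(marginals[:5])     # [29, 27, 25, 23, 21]

# ...and past the peak, extra units actively shrink total output (negative returns).
print(marginals[15:18])  # [-1, -3, -5]
```

The marginal gains fall steadily, hit zero, and then go negative – which is the shape of the chart the law describes.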
It’s a very simple premise; indeed, gamers are already inherently familiar with the concept – it is how the vast majority of RPG levelling mechanics work. Early on, experience makes levelling plentiful and fast, but as things progress, more experience is required per level, so each level takes longer, eventually tailing off to a point where your experience points return nothing at all (see Dark Souls, where Attunement grants additional magic slots per point invested but nothing after 50 – even though you can technically take the stat all the way up to 99, you’d gain nothing for those additional 49 levels!).
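The stat-with-a-hard-cap pattern can be sketched in a few lines. The breakpoints below are invented for illustration – they are not the real Dark Souls Attunement table – but the shape is the same: frequent rewards early, sparse rewards later, and nothing at all past the cap:

```python
# Toy model of a capped RPG stat: slots granted at breakpoints that grow
# further apart, with no reward past the final one. Thresholds are made up.

def slots_for_attunement(points: int) -> int:
    """Magic slots granted at a given stat level (hypothetical curve)."""
    breakpoints = [10, 12, 14, 16, 19, 23, 28, 34, 41, 50]  # invented thresholds
    return sum(1 for b in breakpoints if points >= b)

# Early investment pays off quickly...
print(slots_for_attunement(16))  # 4 slots under this toy curve

# ...but past the cap, every extra point is wasted: zero return.
print(slots_for_attunement(50) == slots_for_attunement(99))  # True
```

Every point from 51 to 99 buys nothing – the in-game equivalent of investing past the point of zero return.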
At its most basic, diminishing returns is simply the idea that investment has a hard cap; beyond that, it is meaningless and potentially dangerous to your bottom line, as you try in vain to sell ever more units in order to pay off your creditors and staff.
The same is arguably true of graphics. In the 1980s, when graphics were simple, a game could be made by a team of five or six people on a budget of $20,000–$40,000. During the 16-bit era of 1989–1995, a budget of $100,000–$250,000 and a team of twelve to twenty-four people sufficed. During the 32/64-bit era, budgets rose to $500,000–$1.5 million, and teams needed fifty people or more. After that, budgets ballooned to $5–10 million, requiring a hundred or so people, and now an average game can cost between $25 and $50 million to make, requiring teams of hundreds – sometimes even thousands – of people, all because graphical prowess demands more work. Making things more realistic requires more people, from animation to modelling and texture work and a lot more besides (particularly with motion capture being all the rage), as the drive to make use of that graphical prowess increases development time (and therefore wage expense).
The end result is the market you see now. In 2008, it was reported that only one in five games turned a profit; by 2016, that figure had reportedly become one in ten. A single console generation had passed, and the increasing expenses involved – not to mention competition from a deluge of high-quality indie games – had decreased the chances that your project would be profitable.
Admittedly, game sales are only one source of revenue; but it used to be that they were the only source you needed. Now, DLC and microtransactions are the norm to make up for that shortfall in profits. Merchandising has been pushed into overdrive, leading to the hilarious “Ub-Iconic” joke, where every new character and costume Ubisoft made was described as ‘iconic’ in order to justify selling baseball caps and bomber jackets emblazoned with logos – anything to make a few dollars more. And chances are that the higher budgets go, the more developers and publishers need to find additional revenue streams, because their games aren’t selling in the same ballpark as a handful of outliers.
This is the curse of graphics and power: each generation has become more expensive, and Gen-7 and Gen-8 have been the point at which more and more studios are seeing the negative return, unable to reduce the costs involved and unable to shift more units. Who remembers when Square Enix was disappointed with 3.4 million unit sales of Tomb Raider (2013)? The tipping point happened years ago – should we really be shocked that the last three years have seen a slow trickle of games and a string of cancelled projects?
(As I write, it is being reported that Scalebound – the Platinum Games project for the Xbox One – has been cancelled, despite the project being mostly complete!)
Gamers are familiar with an old quote: “That which can heal, in large enough doses, can also kill.” It’s a quip used in Assassin’s Creed 2, though the idea is arguably much older. Too much of any single thing is bad for you – we know this, whether it’s graphics (see The Order: 1886) or water (yes, water intoxication is an actual thing). The point at which the race for graphics and hardware became genuinely dangerous for the games industry came during Gen-7, when more projects began to go under and more developers began shifting towards markets with less focus on graphical fidelity – see: smartphones. Today it’s becoming increasingly toxic and debilitating, as consumer figures suggest the home console market is suffering its worst generational performance in twenty years.
Technology will always march onwards; that is inevitable, and there is always going to be an audience for power (see the PC gaming world, where the power at your disposal is often limited only by how much money you are prepared to throw at your hardware). But as more gamers – and more successful game projects – come from the budget and mid-tier markets, talking only about ‘power’ and ‘graphics’ is to focus on one extreme.
Ask yourself how the games industry grew from 1995/1996, with the advent of the PlayStation. It grew because Sony attracted people who wouldn’t ordinarily have played video games, by making games cool and grown-up (while Nintendo, with Pokémon, saw off the competition in the handheld space, as it has done now for twenty years). The PlayStation 2’s success was driven by technological trends: a DVD player and backwards compatibility built into the device, at a price competitive enough that even those who only sought a DVD player were tempted by it. The Nintendo Wii brought in a crowd taken in by its motion controls, breezy visuals and low price point.
And we’re shocked that after twenty years of courting a wider market, the generation where the industry decided to court graphics-hungry gamers turns out to be the weakest in twenty years?
Whatever the Nintendo Switch can or cannot do, focusing on what it can do graphically is a meaningless endeavour. What we should be asking is – can the Nintendo Switch create a renewed interest in home and/or handheld consoles? Can it bring back those who lapsed, for whatever reason, into the gaming sphere once more? Can it provide a shot in the arm for hardware on a mobile level – be that in terms of mobile CPU, GPU or battery life? Can Nintendo’s new machine carve out its own market in a place where PC Gaming and Smartphones sit at polar ends of the spectrum, finding a new middle ground in which to flourish?
More than that, with arguably less emphasis on graphics – though with Zelda: Breath of the Wild and Skyrim coming to the Switch, and talk that the next Assassin’s Creed will follow (meaning power and graphics may not be nearly the problem suggested) – can the Switch provide the mid-tier and budget gaming circles with a platform they can be happy with? Can the games themselves be stronger, deeper, broader?
Right now, there is a vocal portion of the audience loudly fixated on graphics and power at the expense of almost everything else. And I get it. Developers should always have the right to make the game they want; and for that, they can use the PS4 Pro or the Xbox Scorpio, and their 4K prowess. Options are important, and they keep the market interesting. After all, no one seems to care that the 3DS – the weakest hardware in the console market right now – has sold the most units (and, unsurprisingly, the most software). Variety is the spice of life, and a weaker machine can still be surprisingly successful when marketed right and when it has the right kind of software.
The only thing we can do is wait and see. It’s never a wise move to discuss the success or failure of hardware before it has had a chance to fly on the market. The Switch is an unknown, an unproven concept, a new thing, a different thing. The rulebook may be similar, but it may not be the same word-for-word.
And hey, if Pokémon has shown us anything, it’s that games still matter. If Nintendo can do a better job with games than it did during four years of the Wii U, then whatever the Switch’s power, I suspect they’ll be fine. Maybe not the outrageous super-success of the DS, Wii or PlayStation 2 – but in an ideal situation it should equal, or even better, 3DS sales. And Nintendo will be around for years to come, still annoying the portion of the market screaming that Nintendo is doomed because it won’t compete on power and graphics.
Which is as fitting a punishment as any, I suppose.