Videogames as a medium for storytelling have often taken cues from movies, and the clearest example of this is the use of cutscenes. Pac-Man is often said to be the first game to use cutscenes rather than transitioning directly from level to level with no intermission: after the player cleared certain stages, the game would play a short vignette of Pac-Man and the ghosts chasing each other.
Whilst these little scenes are quite obviously a long way from how modern cutscenes are used in games, the core concept is the same.
The game takes control of the character away from the player for a sequence that introduces some new information. The duration of these sequences varies widely – Konami’s Metal Gear Solid series is infamous for its lengthy cutscenes, with Metal Gear Solid 4 clocking in at more than eight hours of them – and they serve a wide variety of purposes.
They are used to introduce characters, develop established ones, provide backstory, set atmosphere, deliver dialogue and more.
However, despite their ubiquity in modern big-budget games, cutscenes are not necessarily the best way to tell a story in a game. Many highly acclaimed games use few cutscenes, preferring instead to let the player control the character throughout.
Half-Life 2 by Valve Software is currently the all-time highest-scoring PC game on the review aggregation site Metacritic, and it has only one cutscene at each end of the game. Control is rarely taken away from the player for more than a few moments – an on-rails sequence towards the end excepted – and much of the background information that other games would present in a cutscene is instead conveyed through scripted events or details in the environment.
But are Half-Life 2’s unskippable, scripted sequences really that different from cutscenes? After all, the player often cannot progress until other characters finish their assigned actions and dialogue – so why not just use traditional cutscenes and be done with it? To get truly unique experiences, we must first look at what makes videogaming unique as a medium for storytelling. Unlike film, where the viewer has no control over the action, or traditional tabletop games, where players’ actions have very little in the way of visual outcomes, videogames provide a unique opportunity to merge interactivity and storytelling. Games like Gone Home, Dear Esther and others in the so-called ‘walking simulator’ genre have been lauded as great examples of the sort of storytelling that can be unique to games.
However, to some gamers, these games present an entirely different problem – although they rarely take control away from the player, they also offer very little gameplay of their own. Indeed, Dear Esther gives the player no way to affect the world around them – the only available action is to walk along a predetermined path to the end of the game. There is no way to ‘lose,’ no interaction with the environment, just what amounts to a scenic tour with some overlaid narration. So, although the game has no cutscenes, the near-complete absence of player control and interaction means there is little to differentiate it from an admittedly rather protracted cutscene.
As videogames currently exist, there seems to be a dichotomy between traditional storytelling and gameplay. For a game to tell a story to a player, there must be some limitation on what the player can do – either temporarily, in the form of a cutscene or scripted sequence, or by restricting the player’s actions for the course of the game. Perhaps future games will manage to integrate a great deal of player interaction with compelling storytelling. But that won’t be accomplished by taking the player’s control away and forcing them to watch a short movie instead of letting them play the game.