Posted On: 2019-04-15
Previously, I wrote about the design choices involved in creating an input system for a game. This week, I'd like to talk about some of the technical challenges of implementing a flexible input system. I will focus on several specific goals that a game developer might pursue in their design, and describe the obstacles that must be overcome to achieve them. Along the way, I will reference a couple of Unity assets that have tried to solve some of these problems - these are by no means exhaustive; they merely represent the assets I have looked at while trying to solve these problems in my own work.
As a designer, one of the goals is to make the experience as intuitive as possible. For menu navigation, this typically means supporting not only the primary form of input used in the game (such as keyboard or controller), but also the primary way of interacting with menus on that device (such as touch on mobile or mouse on a computer.) While some games use the same primary input as their device (for example, mouse-based PC games), many have some kind of limitation in that regard, due either to game design choices or to multi-platform constraints (as an example: consoles typically use controllers for all activities, while a PC port would be expected to support the mouse in menu navigation.) From a design standpoint, this seems like a straightforward problem to solve: describe a control scheme for each kind of input and allow users to freely switch between them.
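As an engine-agnostic illustration, the "control scheme per kind of input" idea might be sketched like this. The class and binding names (ControlScheme, "confirm", "cancel", "ButtonSouth") are hypothetical and not taken from any engine's API:

```python
# Hypothetical sketch: one control scheme per input device, each mapping
# the same logical actions onto that device's physical inputs.

class ControlScheme:
    def __init__(self, device, bindings):
        self.device = device
        self.bindings = bindings  # logical action name -> physical input id

    def input_for(self, action):
        return self.bindings[action]

# One scheme per kind of input the game supports.
schemes = {
    "keyboard": ControlScheme("keyboard", {"confirm": "Enter", "cancel": "Escape"}),
    "gamepad":  ControlScheme("gamepad",  {"confirm": "ButtonSouth", "cancel": "ButtonEast"}),
}

def active_binding(device, action):
    """Look up the physical input for an action on whichever device is active."""
    return schemes[device].input_for(action)

print(active_binding("gamepad", "confirm"))   # -> ButtonSouth
print(active_binding("keyboard", "cancel"))   # -> Escape
```

Because the game code only ever asks about logical actions ("confirm"), switching the active device is just a matter of swapping which scheme answers the lookup.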
Unity provides a Navigation System which might, at first glance, appear to meet these needs: every element is connected to other elements, and a player using a keyboard or controller can follow those connections to change which element is currently focused. Additionally, a player using mouse or touch can interact with any of those elements directly, as they would in any normal PC or mobile application. Unfortunately, issues emerge when trying to let a player seamlessly switch between the two.
Controller support for PC or mobile gaming can provide a lot of value for players that are accustomed to using one. Many game types require quick reflexes, and using a familiar input device can help players leverage their existing skills. Unfortunately, there are many obstacles to providing quality controller support - some of which may make the experience worse than having no support whatsoever.
By default, Unity's input system automatically maps controller inputs into standardized input categories, such as "button 1" or "axis 0". Unfortunately, these mappings are controller- and operating-system-specific (for example, the A button on the Xbox 360 controller is button 0 on Windows but button 16 on Mac.) Fortunately, developers have made assets that work around these limitations (two examples are Rewired and InControl). Any workaround, however, will be limited to a subset of supported controllers - due to the nature of this problem (lack of standardization), it is impossible to create something that works for every possible situation.
Note: Unity's upcoming new input system provides its own implementation of these mappings - including support for some non-gaming devices such as artists' tablets.
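To make the portability problem concrete, here is a small sketch of the kind of per-platform lookup table those workaround assets maintain internally. The Xbox 360 "A" indices follow the numbers cited above (0 on Windows, 16 on Mac); everything else about the structure is an illustrative guess, not any asset's real data:

```python
# Hypothetical per-platform mapping table: the same physical button arrives
# as a different raw index depending on the operating system.

XBOX360_A_BUTTON = {
    "windows": 0,   # per the article: button 0 on Windows
    "mac": 16,      # per the article: button 16 on Mac
}

def is_a_pressed(platform, pressed_buttons):
    """Translate a logical 'A' query into the platform-specific raw index."""
    return XBOX360_A_BUTTON[platform] in pressed_buttons

# The same physical press shows up under a different raw index per platform:
print(is_a_pressed("windows", {0}))   # True
print(is_a_pressed("mac", {0}))       # False - raw index 0 is not 'A' on Mac
print(is_a_pressed("mac", {16}))      # True
```

Multiply this table by every controller model and operating system, and the scale of what Rewired and InControl attempt becomes clear - as does why no table can ever be complete.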
Handling a controller that disconnects mid-use is (as I understand it) a significant technical challenge. An incorrect implementation of controller support can allow such an event to cause a hardware-related crash (such as the "Blue Screen of Death".) Even if the disconnect is handled correctly, a reconnect is typically impossible to distinguish from a new controller being connected (from what I've read on the subject, controllers do not carry a unique identifier, so the computer cannot tell which controller was plugged in.) As before, some assets advertise solutions to this, but I personally have not gone much beyond researching the issue.
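One common mitigation - offered here as a guess at a reasonable policy, not as any engine's verified behavior - is to pause on disconnect, keep the player's slot open, and assign the next device that connects to the vacated slot, precisely because the device itself cannot be identified:

```python
# Hypothetical sketch of slot-based disconnect handling: since a reconnect
# is indistinguishable from a new controller, the game tracks player slots
# and refills a vacated slot with the next device that appears.

class ControllerSlots:
    def __init__(self):
        self.slots = {}        # player index -> device id (None when vacant)
        self.paused = False

    def on_connected(self, device_id):
        # Prefer refilling a vacated slot before creating a new player.
        for player, device in self.slots.items():
            if device is None:
                self.slots[player] = device_id
                self.paused = False
                return player
        player = len(self.slots)
        self.slots[player] = device_id
        return player

    def on_disconnected(self, device_id):
        for player, device in self.slots.items():
            if device == device_id:
                self.slots[player] = None   # keep the slot, pause the game
                self.paused = True
                return player
        return None

slots = ControllerSlots()
slots.on_connected("pad-A")          # becomes player 0
slots.on_disconnected("pad-A")       # game pauses, slot 0 vacated
print(slots.on_connected("pad-B"))   # 0 - the new device fills the vacant slot
```

The trade-off is visible in the last line: a genuinely new controller would also be treated as the returning player, which is exactly the ambiguity the lack of unique identifiers forces on us.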
Note: As a developer, the risk of a Blue Screen of Death is the main reason I use a commercial engine: I want to make sure my software won't cause a BSoD, so I rely on the commercial engine to handle low-level implementation details like reading input and writing to the graphics buffer.
Being able to change controller mappings is extremely important, both from an accessibility standpoint and from a general device-support perspective. The accessibility case should be reasonably self-evident (differences in motor capability should not impact a player's ability to enjoy the game.) General device support matters because, as mentioned previously, it is impossible to support all possible controllers, and players who happen to use an unsupported controller should be afforded at least some recourse (creating the mapping for themselves.) Unfortunately, while this seems like an obvious choice, there are, as always, pitfalls that make it more difficult.
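A typical remapping UI uses a "listen" flow: the player picks an action, and the next raw input received becomes its binding. The sketch below is engine-agnostic and every name in it is illustrative; it also guards against one of the pitfalls alluded to above (my example of such a pitfall: binding one input to two actions):

```python
# Hypothetical sketch of a listen-for-input rebinding flow.

class Remapper:
    def __init__(self, bindings):
        self.bindings = dict(bindings)   # action -> raw input id
        self.listening_for = None        # action awaiting a new binding

    def start_rebind(self, action):
        self.listening_for = action

    def on_raw_input(self, raw_id):
        """Called with every raw input; captures it while listening."""
        if self.listening_for is None:
            return False
        # Pitfall guard: refuse an input already bound to another action.
        if raw_id in self.bindings.values():
            return False
        self.bindings[self.listening_for] = raw_id
        self.listening_for = None
        return True

remap = Remapper({"jump": "button_0", "fire": "button_1"})
remap.start_rebind("jump")
print(remap.on_raw_input("button_1"))  # False - already bound to fire
print(remap.on_raw_input("button_5"))  # True - jump is now button_5
print(remap.bindings["jump"])          # button_5
```

Note that because the rebinding flow listens to raw inputs rather than named ones, it works even for an unsupported controller - which is exactly the recourse described above.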
As you can see, despite developers having implemented it for years, input for a game is not yet a "solved" problem. While some aspects are handled by the engine or by third-party assets, many of the things that are essential for a good user experience are still being reinvented by each developer or studio that confronts them. Working on this for my prototype has led me to understand just how out-of-scope solving all of it is, and has helped me realize that I must allocate time down the road to give this work the focus it deserves for the final game. Additionally, the exercise of trying to do all of this myself has helped me better appreciate just how much work goes into something I ordinarily take for granted in the games I play. Hopefully reading this has helped you as well.