Thoughts on Mobile Touchscreen Gaming

[Image: Phantasy Star II text entry screen]
A few weeks ago I put out a white paper on mobile shopping applications that pointed out an improvement in mobile application architecture accompanying a 21% increase in smartphone ownership worldwide. The biggest gains in ownership were, unsurprisingly, on the Android and iOS platforms. Anyone who spends any time lurking on LinkedIn or job boards will see a huge push for iOS and Android developers. And looking at applications for each of these platforms, game applications in particular, reveals steady improvement over the years in interface design, multiplayer capability, social network linking, and overall responsiveness. So what does all of this mean for gaming applications, you might ask?

As with any technology, advanced capabilities can be a double-edged sword. Improvements in interface design do not necessarily equate to smarter interface design; usually they mean that certain types of controls and interactions can now be used because of greater familiarity with the platform and with each device's technical capabilities. And this is really the heart of the matter: improved interface design should support successful gameplay, multiplayer capability, and social network linking. After all, if an application's interface is awful for the actions intended, then it doesn't really matter how fun your game is, how awesome the multiplayer campaigns might be, or how cool your Facebook page will look with all of the new medals from your game posted on it, because you cannot effectively interact with the game.

This raises the question, then, of what constitutes good touchscreen game interface design. The answer becomes pretty clear when you consider mobile gaming scenarios and issues with touchscreen use in general: adequate feedback, quick application responsiveness, clear visual affordances, and easy-to-learn gestural interactions. To better understand why these four concepts are good answers, let's consider them one by one.

1. Adequate Feedback

The typical definition of adequate feedback is whatever is enough for users to realize that their action has had an effect. Touchscreen devices rely heavily on the visual modality, so the first thing that springs to mind is to use clear visual feedback. That's a nice idea, but users cannot see through their hands and fingers, and those may be blocking the very screen location where the visual feedback occurs. Because of that possibility, consider crossmodal feedback. Crossmodal feedback is just a fancy way of saying, "provide feedback that users can receive through two or more sensory modes." So, perhaps visual feedback in the form of an on-screen glow paired with a specific sound for auditory feedback. Another possibility is tactile feedback for devices that support it. In fact, recent research has shown that tactile feedback improves users' awareness that their actions have had an effect on the system (Hoggan et al., 2008).

Now, tactile feedback might be asking a lot, but take a moment to consider that many touchscreen devices are used on the go, so users might be in an environment where they either cannot clearly hear auditory feedback or might mistake ambient noise for it. Combining tactile feedback with visual or auditory feedback makes it clear that an action was actually registered by the system and has had an effect.
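To make the idea concrete, here is a minimal sketch of crossmodal feedback as a fan-out: one game event is delivered to every feedback channel the device reports as available. All names here (`FeedbackChannel`, `CrossmodalFeedback`, `notify`) are hypothetical illustrations, not any real mobile API.

```python
class FeedbackChannel:
    """One sensory modality (visual glow, audio click, haptic buzz)."""

    def __init__(self, name):
        self.name = name
        self.delivered = []          # record of delivered events, for inspection

    def notify(self, event):
        self.delivered.append(event)
        return f"{self.name}:{event}"


class CrossmodalFeedback:
    """Sends each event to two or more modalities at once, so a finger
    covering the screen or a noisy street doesn't hide the response."""

    def __init__(self, channels):
        self.channels = channels

    def notify(self, event):
        # Fan the same event out to every available channel.
        return [channel.notify(event) for channel in self.channels]


# Example: a button press confirmed with a glow, a click, and a buzz.
visual = FeedbackChannel("glow")
audio = FeedbackChannel("click")
haptic = FeedbackChannel("buzz")

feedback = CrossmodalFeedback([visual, audio, haptic])
responses = feedback.notify("button_pressed")
```

The point of the structure is that the game logic fires one event and never needs to know which modalities reached the user; dropping a channel (say, on a device without a vibration motor) just means constructing the dispatcher with a shorter list.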

2. Quick Application Responsiveness

This goes hand-in-hand with the first point about feedback, but it applies to all types of interactions in a game, regardless of whether feedback is present. Users are impatient, and when they don't receive feedback within an acceptable amount of time they tend to do one of two things: repeat the action until they get some sort of response, or quit. Both can have very frustrating unintended consequences from a gameplay perspective. An argument could be made that mobile gamers are a bit more tech savvy than their non-mobile counterparts, and that may well be true. As a whole, however, mobile technology remains one of the more poorly understood technologies among users in general, regardless of their ability to use technology. So there is still a very good chance that users will tap wildly at the screen, or quit a gaming application, if they do not receive some sort of response within a reasonable timeframe.
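One defensive measure against the "repeat the action until something happens" failure mode is to debounce repeated taps: accept the first tap on a control and ignore impatient repeats until a short window has elapsed. The sketch below is a hypothetical illustration of that idea, not any real framework's API.

```python
class TapDebouncer:
    """Ignore repeat taps on the same control inside a debounce window,
    so an impatient user mashing a button doesn't fire the action twice."""

    def __init__(self, window_ms=300):
        self.window_ms = window_ms
        self.last_tap_ms = {}        # control id -> time of last accepted tap

    def accept(self, control_id, now_ms):
        """Return True if this tap should trigger the control's action."""
        last = self.last_tap_ms.get(control_id)
        if last is not None and now_ms - last < self.window_ms:
            return False             # too soon: treat as an impatient repeat
        self.last_tap_ms[control_id] = now_ms
        return True


# Four taps on the same "fire" button at 0, 100, 250, and 400 ms:
# only the first and last fall outside the 300 ms window.
debouncer = TapDebouncer(window_ms=300)
results = [debouncer.accept("fire", t) for t in (0, 100, 250, 400)]
```

Debouncing treats the symptom, of course; the real fix is still to acknowledge the first tap quickly enough that users never feel the need to repeat it.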

3. Clear Visual Affordances

Remember what I said earlier about touchscreen devices relying heavily upon the visual modality? The need for clear visual affordances is amplified by that fact. Something has to look interactive for touchscreen users to even bother with it, much as when users interact with websites or software. Sure, users could simply tap away at on-screen items to figure out whether those items actually do anything, but in all my years of testing touchscreen devices I have yet to see users tap willy-nilly. More often than not they proceed very cautiously, half-paralyzed by the fear that a tap will cause some unexpected reaction they cannot recover from. And those who do tap like crazy will likely become frustrated, because they will eventually hit something actionable without knowing exactly what it was, and then have to either backtrack or quit the application if the result was not what they wanted. So at the end of the day, make it clear which screen items users can interact with. Here are some guidelines for that:

  • Do not use transparency on active screen items
  • Do not overlap inactive items over any portion of an active item
  • Use strong color cues to signal an active screen item, either through saturated colors or through adequate contrast between button and label colors

4. Easy-to-Learn Gestural Interactions

This concept has two facets. First, keep the interactions simple: the more complex an interaction is to perform, the less likely users are to perform it successfully. This is really important in the context of gaming, because an interaction may be under a time limit, or users may be monitoring other screen locations while trying to perform it. Second, limit the number of awkward contortions, repetitive or prolonged actions, and forceful actions. I realize that's a lot of "no" for one guideline, so let me break it down to make it easier to digest.

An awkward contortion would be asking the user to hit two buttons in close proximity on one side of the screen while hitting a button on the opposite side, when you know full well that they need both hands just to hold the device up and see the screen. Repetitive or prolonged actions are problematic because they induce muscle fatigue. There is also a learning curve here for those new to touchscreen gaming: console games, for example, impose physical limits on range of motion and on how it translates to on-screen movement, so pushing a joystick to its limit usually yields the fastest character movement. Because touchscreen devices lack that physical boundary, users may keep moving their finger in one direction, thinking they will keep accelerating, until they have actually moved their finger off the control entirely. Finally, forceful actions can cause short-term pain and long-term damage. This is simple physics: a finger moving quickly toward a touchscreen carries kinetic energy and strikes the glass with real force, and a rigid touchscreen does a poor job of absorbing or dispersing that force, so it dissipates through the next available medium: your flesh and cartilage. Ow.
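The missing-physical-boundary problem above is usually solved in the control layer: the virtual joystick clamps finger displacement itself, so dragging past the stick's radius (or off the control) gains nothing. A minimal sketch, with all names and the specific radius/speed values hypothetical:

```python
import math


def drag_to_speed(dx, dy, stick_radius=60.0, max_speed=5.0):
    """Map a finger drag (pixels from the virtual stick's center) to
    character movement speed, capped at the stick's edge.

    Inside the radius, speed scales linearly with displacement; at or
    beyond the radius, speed stays pinned at max_speed, mimicking the
    physical gate a console joystick provides for free."""
    distance = math.hypot(dx, dy)
    clamped = min(distance, stick_radius)
    return max_speed * clamped / stick_radius


# Halfway to the edge gives half speed; far past the edge is still capped.
half_drag = drag_to_speed(30, 0)     # 30 px of a 60 px radius
overshoot = drag_to_speed(120, 0)    # twice the radius, same top speed
```

Rendering the stick's boundary on screen, and snapping the thumb indicator to it when the finger overshoots, gives users the visual substitute for the physical limit they can no longer feel.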