The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications [2007]

Date: September 2007

Source: Sears, A., & Jacko, J. A. (Eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications (2nd ed., pp. 755-756). CRC Press.

Types of Formative Studies

There are essentially three types of formative studies conducted during development: (a) the critical facet test, (b) the initial experience test, and (c) the extended Playtest. We also run tests that do not fit easily into any of these categories, including subtle variations on the above, a few cases of large-sample observational studies, and studies that examine games at a more conceptual level. As in the usability section, for clarity of presentation, each technique will be discussed separately, followed by a case study. Each case study contains only the information pertinent to the specific technique; thus, examples may be taken from a larger Playtest.

Critical facet Playtest. Games often take the form of repeating a core experience within an array of different contexts and constraints. A driving game is always about aiming an object that is hurtling through space. The critical facet Playtest focuses directly on that core experience and on making sure that it is fun. While the core experience can often be assessed in usability testing, Playtesting is necessary to assess attitudes and perceptions about it.

The following example demonstrates how a critical facet test was applied to Oddworld: Munch’s Oddysee (2001), an Xbox game. Oddworld: Munch’s Oddysee is a platform/adventure game that allows you to switch back and forth between two main characters as they proceed through the increasingly difficult dangers of Oddworld on a quest to save Munch’s species from extinction. The core gameplay is exploring the realm by running, jumping, and swimming through the environment. In this case, there were concerns about the user’s visual perspective, which we typically call the camera. Does the camera support exploration of the environment?

Case study: Munch’s Oddysee, Camera. Previous usability testing with Oddworld: Munch’s Oddysee (2001) determined that while some users indicated dissatisfaction with the behavior of the camera, other participants chose not to mention it at all while engaged in the open-ended usability tasks. The camera’s behavior was programmed to create maximal cinematic effects (e.g., sometimes zooming out to show the size of an area) and to enhance gameplay. The camera would often show a specific view with the intent of showing you what was behind the next door or on the other side of the wall while still keeping the main character in view. While many users liked the look, style, and behavior of the camera, they often wanted more control over its behavior. Indeed, some participants would actively say things such as, “That looks really cool right now [after the camera had done something visually interesting], but I want the camera back pointing this way now.” Because feedback from the usability lab was both positive and negative, the development team did not see the usability data as conclusive. Further, changing the camera would impose a major cost on the game in terms of redesign and redevelopment time.

After playing the game for an hour, 25 participants were asked for their general perceptions of the game. More specific questions followed. Questions related to the camera were asked in the latter portion of the questionnaire because previous experience in the usability lab had shown that merely mentioning the camera as part of a task would often cause participants previously silent on the subject to criticize aspects of the camera’s behavior vociferously. Because we wanted to factor out any priming-related effects, two analyses were conducted.

The first analysis was based on responses to the questions about the behavior of the camera itself. Nearly half of the participants (46%) indicated that the camera did not give them enough flexibility of control. The second analysis went back through individual responses to determine the attitudes of those participants who mentioned the camera before the survey first broached the subject. Forty-three percent of the participants were found to have mentioned the camera in a negative fashion before being asked the camera-specific questions.
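As a rough illustration of the bookkeeping behind these two analyses, the sketch below tallies prompted and unprompted camera responses in Python. The data structure, field names, and keyword check are illustrative assumptions rather than a description of the actual study; in particular, the second analysis in the chapter relied on human coding of open-ended responses as negative, which a simple keyword match cannot replicate.

```python
# Hypothetical sketch of the two camera analyses; the survey format and
# field names are assumptions, not the study's actual data.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Response:
    participant_id: int
    # Answer to the later, camera-specific question: did the camera
    # give the participant enough flexibility of control?
    camera_flexible_enough: bool
    # Free-text comments recorded before the camera questions were asked.
    early_comments: List[str] = field(default_factory=list)


def pct(count: int, total: int) -> float:
    """Percentage of total, guarding against an empty sample."""
    return 100.0 * count / total if total else 0.0


def analyze(responses: List[Response]) -> None:
    n = len(responses)

    # Analysis 1: direct answers to the camera-flexibility question.
    not_flexible = sum(1 for r in responses if not r.camera_flexible_enough)

    # Analysis 2: participants who brought up the camera before the survey
    # broached the subject. A keyword match only flags that the camera was
    # mentioned; judging whether the mention was negative still requires
    # a human coder, as in the study.
    unprompted = sum(
        1 for r in responses
        if any("camera" in comment.lower() for comment in r.early_comments)
    )

    print(f"Not enough camera flexibility: {pct(not_flexible, n):.0f}% of {n}")
    print(f"Mentioned camera unprompted:   {pct(unprompted, n):.0f}% of {n}")
```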

Based on these data and other anecdotal evidence, the development team chose to give players more flexibility of camera control. The result was more frequent use of a camera behavior we termed the third-person follow camera. This camera had the double advantage of being more easily controlled by users and of conforming to the behaviors users more often expected. It maintained focus on the main character without major adjustments to the point of view (e.g., to “look over a wall” or “behind a door”). Other camera behaviors (e.g., stationary cameras that pan with the character) are still part of the game but have been localized to areas where they can only create an advantage for the user.