• Towards a Domain Independent Framework for Input Device Testing and Comparison in Gaming

    Clark Rinker
    Western Washington University Computer Science
    rinkerc@students.wwu.edu

  • Abstract

    Augmented and virtual reality hardware has not reached its true potential due to our limited understanding of users’ experience with these devices [1]. To bridge this gap, we propose a framework for examining input devices for use with augmented and virtual reality hardware. Our contribution is in three parts. First, we provide a categorization of the types of user interfaces and content that one can encounter in virtual and augmented applications. Second, we propose the design of an experimental software testbed for evaluating an input device’s potential for use in virtual and augmented reality applications. Finally, we propose an experimental protocol, which was informally tested using a pilot study.

  • Introduction

    While a variety of commercial input devices are available to users today, none are as predominant as the keyboard and mouse on personal computers [2], gamepads on home consoles, and touchscreens on mobile devices [3]. These input devices have emerged as the most effective way to interact with their respective platforms for common tasks such as navigation, text input, and gaming. As consumer virtual and augmented reality devices become available in the imminent future, users will face the challenging task of learning to interact with depth in immersive 3D experiences. However, little is known about the efficacy of these input devices for users making the transition from 2D screens to 3D headsets. The novel interactions may or may not be embraced by users due to unfamiliarity with interaction in a 3D environment. Alternatively, we may discover that alternative input methods [4], such as remotes or gesture detection, can prove more effective than currently ubiquitous methods, or that entirely new input devices will need to be developed to satisfy user needs.

    In our research we aim to learn about users’ experience with augmented reality headsets. By conducting an informal pilot study, we discovered that traditional game engine IDEs such as Unity make it difficult to interchange input devices for the purpose of testing augmented content. Too often these systems are designed to abstract input across personal computers, consoles, and mobile devices, rendering them inadequate for use with non-standard input devices. This inadequacy led to the development of a software testbed which can map a variety of input devices to an action and render 2D, 3D, virtual, and augmented content, among other uncommon features. With a proper testbed we can empirically study the relative strengths and weaknesses of the available input devices for virtual and augmented reality environments.
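
    As a minimal sketch of what such a testbed’s input layer could look like (names such as DeviceProfile and Action are illustrative, not the actual testbed API), each device exposes raw channels that a per-device profile maps onto a shared set of game actions, so game code never touches the device directly:

    ```python
    # Hypothetical sketch of an input-abstraction layer for the testbed.
    # Each device exposes raw channels; a per-device profile maps them onto
    # a shared set of actions, so swapping devices only swaps profiles.
    from enum import Enum, auto
    from typing import Dict, Optional, Tuple


    class Action(Enum):
        MOVE_X = auto()  # left/right translation
        MOVE_Y = auto()  # forward/back translation
        PITCH = auto()
        YAW = auto()
        SELECT = auto()


    class DeviceProfile:
        """Maps one device's raw input channels onto shared actions."""

        def __init__(self, name: str):
            self.name = name
            self._bindings: Dict[str, Action] = {}

        def bind(self, raw_channel: str, action: Action) -> None:
            self._bindings[raw_channel] = action

        def translate(self, raw_channel: str, value: float) -> Optional[Tuple[Action, float]]:
            """Return (action, value), or None if the channel is unbound."""
            action = self._bindings.get(raw_channel)
            return None if action is None else (action, value)


    # Game code queries actions, never devices:
    gamepad = DeviceProfile("Xbox Gamepad")
    gamepad.bind("left_stick_x", Action.MOVE_X)
    gamepad.bind("left_stick_y", Action.MOVE_Y)

    leap = DeviceProfile("Leap Motion")
    leap.bind("palm_dx", Action.MOVE_X)
    leap.bind("palm_dy", Action.MOVE_Y)

    print(gamepad.translate("left_stick_x", 0.8))  # (Action.MOVE_X, 0.8)
    ```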

    In addition, to learn about user experience with these input devices, we developed an experimental protocol focused on understanding users’ comfort and aptitude with an input device while performing gamified tasks. These tasks reflect the types of interactions a user may have in 2D, 3D, virtual, and augmented environments. We conducted a preliminary study with a small group of users ranging from novice to proficient in various computer and gaming tasks. Findings from this pilot study will help us prepare for an in-depth study with a larger sample size, which will enable us to guide the design of next-generation interaction techniques.

  • Background

    While a wealth of work has been created since Steve Mann’s seminal contribution to wearable computing, advances in user input, sensor fidelity, rendering technology, and content generation have placed the technology on the cusp of consumer adoption. Due to the tremendous technological undertaking of creating these supporting technologies, there has yet to emerge a dominant new user input device for augmented reality. Understanding these other technologies is integral to designing an AR input device.

  • Pilot Study

    Our pilot study consisted of 11 persons (3 female) testing a range of input devices across multiple domains to encapsulate the above-mentioned categories. All participants were asked to complete a test set of game skills using the keyboard and mouse, the Xbox gamepad, the Steam Controller, and the Leap Motion, our augmented reality stand-in device. Our test set included a blockworld navigated in the first and third person, a racing simulator, a fighter plane simulator, and an on-rails flying game in the vein of the Nintendo classic Star Fox. Participants were asked to complete the four games with each of the input devices, answering a semi-structured questionnaire about their experience.

    For each device we asked users questions regarding the “dimensionality” of the input device. For instance, a D-pad would be effective for playing a racing game but ineffective for piloting a plane, where roll, pitch, and yaw require additional axes of input to control (see the sketch below). Users were asked about their ability to navigate their avatar within the test game they were playing, but also how effectively they believed the device would generalize to other tasks, such as text input or menu navigation. Finally, we asked users to rate the haptic feedback of the device, including the physical feel of the controller (using a control stick vs. using a mouse) and any servo-driven vibration or resistance (“rumble”) they noticed. The question of haptics was of particular importance to us, as we postulate that a lack of touch feedback is a barrier to creating immersive AR experiences. To enable comparison between participants, we also collected data on participants’ socioeconomic status, age, and experience with gaming.
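
    To make the “dimensionality” question concrete, the sketch below tallies illustrative (not measured) counts of continuous input axes per device; a flight task needs roll, pitch, and yaw on top of planar movement, so devices with fewer free axes fall short:

    ```python
    # Illustrative axis counts per device (assumed for the example, not
    # taken from device specifications).
    AXES = {
        "d-pad": 2,                # discrete left/right + up/down
        "mouse": 2,                # planar dx, dy
        "dual control sticks": 4,  # two 2-axis sticks
        "leap motion hand": 6,     # palm position (3) + orientation (3)
    }


    def sufficient_for(device: str, required_axes: int) -> bool:
        """Does the device expose at least the axes the task demands?"""
        return AXES.get(device, 0) >= required_axes


    print(sufficient_for("d-pad", 2))             # racing game: True
    print(sufficient_for("d-pad", 4))             # plane (roll/pitch/yaw + throttle): False
    print(sufficient_for("leap motion hand", 4))  # True
    ```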

  • Results

    Matching demographic expectations, men were more likely to respond yes to “I am a gamer” or “I am relatively familiar” when asked about their gaming experience. Similarly, men reported being more proficient with the Xbox controller or keyboard and mouse than their female counterparts. More interesting results were found with the Steam Controller and Leap Motion. While the Steam Controller resembles a traditional gamepad, its haptic touchpads in place of dual control sticks proved foreign to the more avid gamers, and female users performed better than their male counterparts. Even more interesting were our observations of the participants’ use of the Leap Motion controller: without the traditional haptic feedback of the controllers, the more experienced “gamer”-identifying men found the Leap Motion device more frustrating and performed worse at the on-rails piloting game, taking longer to complete it successfully and achieving a lower score.

    We postulate from our pilot study that the transition to using next generation input devices on virtual and augmented reality headsets will be more challenging for those with prior gaming experience than for those without. As these new devices will rely on more natural input methods such as hand gestures, skilled controller users (with traditional input devices) will have to unlearn the mental mappings they have created to be proficient with earlier devices. Conversely, those who spent less time focusing on mastering game controllers will find it easier to transition into using these next generation input devices.

  • Future Work

    While our pilot study offered insight into how users with varying proficiencies interact with familiar and foreign devices, an in-depth study is necessary to gain a better understanding of how demographics and personal experience affect one’s ability to master the new generation of input devices. Additionally, while collecting data for our study, we were stymied by the brittleness of different game engines’ input systems, which made it extremely challenging to generalize across console, PC, and mobile input. Currently, we are working on an input testbed that will aid in collecting data about a variety of input devices and can be used to examine user experience with traditional and novel input devices.

  • Content Classification

    We describe four basic categories of experiences, dealing with the type of content and the way it is experienced by the user [5]. We describe their general definitions below; a minimal code encoding of the taxonomy follows the definitions.

    Classic 2D experiences focus on two-dimensional content - images and text - displayed on a flat surface such as a computer screen. Examples include window managers, terminals, and 2D games.
    Classic 3D experiences focus on three-dimensional content - meshes and interactive worlds - displayed on a flat surface, such as a computer screen. Examples include 3D games, interactive globes, and CAD (Computer-Aided Design) systems. Interaction is often necessary for experiencing the 3D content.
    2D content displayed using a stereoscopic system can be placed as a virtual screen. Images can be viewed from the sides or from behind. Examples include HUDs (Heads-Up Displays) and virtual screens.
    3D content displayed using a stereoscopic system can become immersive. Examples include immersive games, telepresence (viewing or projecting), and interactive virtual models.
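
    A minimal encoding of this taxonomy, assuming nothing beyond the four definitions above (all names are illustrative): content dimensionality crossed with display dimensionality yields the four experience categories.

    ```python
    # Sketch: the four experience categories as content x display.
    from dataclasses import dataclass
    from enum import Enum


    class Content(Enum):
        FLAT_2D = 2     # images, text
        SPATIAL_3D = 3  # meshes, interactive worlds


    class Display(Enum):
        FLAT_SCREEN = 2   # monitor, phone
        STEREOSCOPIC = 3  # head-mounted display


    @dataclass(frozen=True)
    class Experience:
        content: Content
        display: Display

        def describe(self) -> str:
            table = {
                (Content.FLAT_2D, Display.FLAT_SCREEN): "classic 2D (window managers, terminals, 2D games)",
                (Content.SPATIAL_3D, Display.FLAT_SCREEN): "classic 3D (3D games, globes, CAD)",
                (Content.FLAT_2D, Display.STEREOSCOPIC): "virtual screen / HUD",
                (Content.SPATIAL_3D, Display.STEREOSCOPIC): "immersive (games, telepresence, virtual models)",
            }
            return table[(self.content, self.display)]


    print(Experience(Content.FLAT_2D, Display.STEREOSCOPIC).describe())
    ```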

  • Sensor Classification

    With the exception of trivial HUD AR experiences à la Google Glass, sensor telemetry is integral to augmented reality on two fronts. First, precise measurements of the position and orientation of the head-mounted display must be reconciled with the position and orientation of a virtual camera in a 3D rendered scene in order for augmented content to appear accurate when overlaid on the real world. Second, the position and orientation of physical objects within a scene must be known in order to provide context to augmented content. Finally, the degrees of freedom with which a user can navigate an augmented world stipulate the types of interactions a user can experience. DOF in AR is inherently bounded by sensor technology.
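
    The first front can be sketched as a pose reconciliation step: the virtual camera simply adopts the HMD’s measured pose, and world-anchored content is rendered through that pose’s inverse. A minimal sketch, assuming a yaw-only orientation for brevity (a real system would consume a full quaternion from the sensor fusion layer):

    ```python
    # Sketch: virtual camera mirrors the HMD's measured pose so overlaid
    # content lines up with the world. Right-handed frame, column vectors.
    import numpy as np


    def pose_matrix(position: np.ndarray, yaw_rad: float) -> np.ndarray:
        """4x4 rigid transform from a position and a yaw-only orientation."""
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        T = np.eye(4)
        T[:3, :3] = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # rotation about +Y
        T[:3, 3] = position
        return T


    hmd_pose = pose_matrix(np.array([0.1, 1.7, 0.0]), np.deg2rad(30))
    virtual_camera = hmd_pose  # any mismatch here shows up as misregistered content

    # A world-space anchor viewed through the camera's inverse (the view matrix):
    anchor_world = np.array([0.0, 1.5, -2.0, 1.0])  # homogeneous coordinates
    anchor_camera_space = np.linalg.inv(virtual_camera) @ anchor_world
    print(anchor_camera_space[:3])
    ```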

  • Sensor Criteria

    Sensor Fidelity and Precision (Error Rate)

    As an AR simulation runs continuously, sensors must be resistant to error accumulation [1]. For example, the Inertial Measurement Unit in a mobile device is quite effective at detecting rotation events (switching the device from portrait to landscape mode), but prolonged use of the accelerometer (even over a one-second interval!) leads to an inaccurate measurement of position, as sketched below. Algorithmic compensation for error is the domain of signal processing, employing techniques such as the Kalman Filter.
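
    A toy demonstration of that accumulation, with an assumed (not measured) accelerometer noise figure: zero-mean noise in acceleration becomes a random walk after the double integration needed to recover position.

    ```python
    # Why double-integrating accelerometer samples drifts: noise in
    # acceleration integrates into a random walk in velocity, and again
    # into unbounded error in position.
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 1.0 / 120.0             # 120 Hz IMU sampling
    t_total = 1.0                # even one second, as noted above
    n = int(t_total / dt)

    accel_noise = rng.normal(0.0, 0.05, n)   # m/s^2; device is actually stationary
    velocity = np.cumsum(accel_noise) * dt   # first integration
    position = np.cumsum(velocity) * dt      # second integration

    print(f"position error after {t_total:.0f} s: {abs(position[-1]) * 100:.2f} cm")
    ```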

    Sensor fidelity is governed by sampling rate and resolution. The ideal sampling rate is twice the rendering rate of the simulation (a new sample every ~8 ms at 60 FPS) so that a fresh sample is always available for each frame; the arithmetic is spelled out below. Ideal sensor resolution would be sub-pixel fidelity at a given rendering distance, so as to avoid visual artifacts. For example, IMUs that sample at 120 Hz are sufficient for current rendering technology (pre sub-millisecond render technologies such as DX12/Vulkan), whereas MIT’s Chronos, which uses Wi-Fi time of flight to measure the location of a mobile device to within 4 cm, would produce visual artifacts when rendering short-range content.
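
    The frame-budget arithmetic, spelled out:

    ```python
    # Sampling at twice the render rate guarantees at least one fresh
    # sample per rendered frame.
    render_fps = 60
    frame_budget_ms = 1000 / render_fps               # ~16.7 ms per frame
    ideal_sample_rate_hz = 2 * render_fps             # 120 Hz
    sample_interval_ms = 1000 / ideal_sample_rate_hz  # ~8.3 ms between samples

    print(f"frame budget: {frame_budget_ms:.1f} ms; "
          f"sample every {sample_interval_ms:.1f} ms at {ideal_sample_rate_hz} Hz")
    ```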

    “Relative” vs. “Global” Positioning

    Sensors can be classified by whether they give measurements in reference to themselves, to their scene, or to the set of all scenes. For example, an IMU may give the local rotation of a head-mounted device, a beacon may give location relative to a mapped room, and GPS would give position relative to the Earth (where all successful AR simulations have taken place). How these measurements are interpreted and combined in the simulation layer affects the types of AR content which can be created.
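
    A sketch of how the simulation layer might compose those three frames, assuming the GPS fix has been converted to local metric coordinates: a GPS reading places the room in the world, a beacon places the head in the room, and the IMU orients the head.

    ```python
    # Composing reference frames: world <- room <- head.
    import numpy as np


    def transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
        """Build a 4x4 rigid transform from a 3x3 rotation and translation."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T


    yaw = np.deg2rad(45)  # IMU: local head orientation (yaw-only for brevity)
    head_in_room = transform(
        np.array([[np.cos(yaw), 0, np.sin(yaw)],
                  [0, 1, 0],
                  [-np.sin(yaw), 0, np.cos(yaw)]]),
        np.array([1.0, 1.7, 2.0]),  # beacon: head position within the room
    )
    # GPS fix, here assumed already converted to local metric coordinates:
    room_in_world = transform(np.eye(3), np.array([5_000.0, 0.0, 3_000.0]))

    head_in_world = room_in_world @ head_in_room
    print(head_in_world[:3, 3])  # head position in world coordinates
    ```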

  • Sensor Survey

    Sensors in augmented reality are either device mounted, giving position/orientation relative to scene features, or scene mounted, providing the position of the HMD relative to fixed room features. Both techniques have advantages and drawbacks, and composing sensors leads to a more immersive AR experience. The following table is an extensive, though not exhaustive, list of sensors for use in augmented reality.

    | Sensor                    | Location      | Pros                                  | Cons                                                                                     |
    |---------------------------|---------------|---------------------------------------|------------------------------------------------------------------------------------------|
    | Inertial Measurement Unit | Head Mounted  | Accurate orientation                  | Inaccurate position                                                                        |
    | RGB Camera                | Head Mounted  | Accurate orientation and position     | Tracking locked to marker/feature visibility; precision scales computation quadratically   |
    | Depth Camera              | Head Mounted  | Feature detection: surface extraction | Range limitations                                                                          |
    | Lidar                     | Head Mounted  | Feature detection: surface extraction | High cost                                                                                  |
    | IR Pylons                 | Scene Mounted | Accurate position                     | Limited movement                                                                           |

  • Degrees of Freedom from Sensors

    When evaluating user experiences in augmented reality, it is important to consider the degrees of freedom with which a user can explore the augmented scene. A perfect system would of course allow augmented content to be placed anywhere, and viewed from any angle, that physical content could be, creating a perfectly immersive experience. While obviously impossible to build, this optimal system sets the criteria by which users will judge augmented experiences.

    No Sensors

    With no position and no orientation for our virtual camera, we can only create augmented HUDs, like those used in Google Glass. While this allows the user to keep their vision focused on their task while viewing contextual information, HUDs provide only the tip of the iceberg for augmented reality. It is also important to note that there is experimental evidence that HUDs can actually hinder one’s ability to focus on tasks such as piloting a vehicle.

    Camera Tracking


    With the exception of specialized devices like Google Glass or Daqri’s Smart Helmet, the vast majority of deployed AR applications run on smartphones with a monocular camera. These applications work by superimposing 3D content over a marker or feature detected by the camera. With camera marker tracking, a user’s position is limited to the half-sphere in which the camera can see the marker. This sphere’s radius is the maximum distance at which the camera can detect the marker or feature, measured with a 480p camera and ARToolKit to be between 1-2.5 meters. Increasing the resolution of the camera can increase the radius of this half-sphere, at the cost of quadratic growth in computation, as illustrated below.
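
    The quadratic-scaling claim, made concrete with illustrative figures: doubling the linear resolution roughly quadruples the pixels each detection pass must touch.

    ```python
    # Per-frame pixel work grows quadratically with linear resolution.
    widths = [640, 1280, 2560]   # 480p width, then doubled twice
    base_pixels = 640 * 480

    for w in widths:
        h = int(w * 480 / 640)   # keep 4:3 aspect ratio for comparison
        print(f"{w}x{h}: {w * h / base_pixels:.0f}x the per-frame pixel work")
    ```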

    IMUs


    As discussed earlier, Inertial Measurement Units can accurately measure orientation but not position. This technique has been used by the original Oculus development kits and mobile VR experiences like Google Cardboard. IMUs provide the inverse of the camera tracking experience: the user’s position is locked to the center of a sphere, while their orientation is unlocked, allowing them to examine any point on the inside of the sphere. 360 video and VR movie theater applications use IMUs effectively for their experience; however, the inability to move limits the experience’s immersion.

    Combining IMUs and camera tracking provides an interesting hybrid case of augmented reality. A user in this system can translate their position by maintaining camera tracking of their marker and then “unlock” their orientation by looking away from the marker. This technique creates a topology of overlapping spheres that the user can navigate. The experience would feel quite constricting, however, as humans are used to moving and looking at the same time, and doing so would cause tracking artifacts: AR content bound to the marker or feature would appear off-center until the user reacquired the marker.

    Scene Mapping


    As an alternative to, or in conjunction with, performing tracking via a head-mounted sensor, a user’s position and orientation can be mapped by static sensors in a room, as implemented by Valve and HTC’s Vive. In this system the user has freedom of orientation and freedom of movement, provided that they remain within the sensor boundaries. This creates a potentially unpleasant user experience: unlike in the case of IMU or camera tracking, the user believes that they have freedom of movement and will meet an abrupt loss of tracking when they cross the sensor boundary.

    This limitation defines the type of content that can be used with scene mapping. For example, a video game in which the user walks through an open area would create a suboptimal user experience: users would constantly run into the sensor boundary. In contrast, a virtualized workspace that attaches AR content to physical content in the room could provide a more immersive simulation.

{"cards":[{"_id":"67c0e860bfc592cec7000069","treeId":"67c0e6fbbfc592cec7000066","seq":7423881,"position":1,"parentId":null,"content":"Towards a Domain Independent Framework for Input Device Testing and Comparison in Gaming\n\nClark Rinker\nWestern Washington University Computer Science\nrinkerc@students.wwu.edu\n\n"},{"_id":"67c0e8eebfc592cec700006a","treeId":"67c0e6fbbfc592cec7000066","seq":7423886,"position":1,"parentId":"67c0e860bfc592cec7000069","content":"# Abstract\n\nAugmented and virtual reality hardware has not reached their true potential due to our limited understanding about user’s experience surrounding these [1](http://dx.doi.org/10.1109/ISMAR.2008.4637362). To bridge this gap, we propose a framework for examining input devices for use with augmented and virtual reality hardware. Our contribution is in three parts. First, we provide a categorization of the types of user interfaces and content that one can encounter in virtual and augmented applications. Second, we propose the design of an experimental software testbed for evaluation of an input device’s potential for use in virtual and augmented reality applications. Finally, we propose an experimental protocol, which was informally tested using a pilot study.\n"},{"_id":"67c0ea42bfc592cec700006b","treeId":"67c0e6fbbfc592cec7000066","seq":7423896,"position":2,"parentId":"67c0e860bfc592cec7000069","content":"# Introduction\n\nWhile a variety of commercial input devices are available to users today, none are as predominant as the keyboard and mouse on personal computers[2](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.158.1947&rep=rep1&type=pdf), gamepads on home consoles, and touchscreens on mobile devices[3](http://link.springer.com/chapter/10.1007/978-1-4419-9845-3_11). These input devices have emerged as the most effective way to interact with their respective platforms regarding common task such as navigation, text input, and gaming. As consumer virtual and augmented reality devices become available in the imminent future, users will face the challenging task of familiarizing themselves interacting with depth in immersive 3d experiences. However, little is known about the efficacy of these input devices that will require users to make a transition from 2D screens to 3D headsets. The novel interactions may or may not be embraced by the users due to unfamiliarity of interaction in a 3D environment. Alternately, we may discover that these alternative input methods [4](http://dx.doi.org/10.1145/1166253.1166261), such as remotes or gesture detection, can prove more effective than currently available ubiquitous methods, or perhaps that entirely new input devices will need to be developed to satisfy user needs.\n\nIn our research we aim to learn about users experience with augmented reality headsets. By conducting an informal pilot study, we discovered that traditional game engine IDEs such as Unity make it difficult to interchange input devices for the purpose of testing augmented content. Too often these systems are designed for abstracting input across personal computers, consoles, and mobile devices, therefore rendering them inadequate for use with non-standard input devices. This inadequacy led to the development of a software testbed which can map a variety of input devices to an action, render 2d, 3d, virtual and augmented content, among other uncommon features. 
With a proper testbed we can empirically study the relative strengths and weaknesses of the available input devices for virtual and augmented reality environment.\n\nIn addition, to learn about user experience with these input devices, we developed an experimental protocol focusing on understanding user’s comfort and aptitude with an input device while performing gamified tasks. These tasks reflect the types of interactions a user may have in 2d, 3d, virtual and augmented environment. We conducted a preliminary study with a small group of users ranging from proficient to novice in various computer and gaming tasks. Findings from this pilot study will help us to prepare for an in-depth study with larger sample size, which will enable us to guide the design of next generation interaction technique. \n"},{"_id":"67c0eb7dbfc592cec700006c","treeId":"67c0e6fbbfc592cec7000066","seq":7328910,"position":3,"parentId":"67c0e860bfc592cec7000069","content":"\n# Background\nWhile a wealth of work has been created since [Steve Mann's](http://wearcam.org/contributions.pdf) seminal contribution to wearable computing the advances in user input, sensor fidelity, rendering technology, and content generation has placed the technology on the cusp of consumer adoption. Due to the tremendous technological undertaking of creating the latter there has yet to be a dominant new user input device for augmented reality. Understand these other technologies in integral for designing an AR input device."},{"_id":"67c0fd43bfc592cec7000072","treeId":"67c0e6fbbfc592cec7000066","seq":7423897,"position":0.25,"parentId":"67c0eb7dbfc592cec700006c","content":"## Content Classification\nWe describe four basic categories of experiences, dealing with the type of content, and the way it is experienced by the user[5](http://sonify.psych.gatech.edu/~ben/references/card_a_morphological_analysis_of_the_design_space_of_input_devices.pdf). We describe their general definitions below.\n\nClassic 2d experience focuses on two dimensional content - images and text - displayed on a flat surface like a computer screen. Examples include window managers, terminals, or 2d games.\nClassic 3d experience focuses on three dimensional content - meshes and interactive worlds - displayed on a flat surface, such as a computer screen. Examples include 3d games, interactive globes, and CAD (Computer Aided Design) systems. Interaction is often necessary for experiencing the 3d content.\n2d content displayed using a stereoscopic system can be place as a virtual screen. Images can be viewed from the sides or from behind. Examples include HUDs (Heads up Displays) and virtual screens.\n3d content displayed using a stereoscopic system can become immersive. 
For example, immersive games, telepresence (viewing or projecting), and interactive virtual models."},{"_id":"68478e40d3484f845100007a","treeId":"67c0e6fbbfc592cec7000066","seq":7422741,"position":1,"parentId":"67c0fd43bfc592cec7000072","content":"# Images\n## 2D Content, 2D Experience\n![](https://i.imgur.com/JDVyV6s.png)\n## 3D Content, 2D Experience\n![](https://i.imgur.com/fJarioM.png)\n## 2D Content, 3D Experience\n![](https://i.imgur.com/Py5D2Lj.png)\n## 3D Content, 3D Experience\n![](https://i.imgur.com/P8J6WAi.png)\n## Asset Sources\n* https://upload.wikimedia.org/wikipedia/commons/7/71/LightningVolt_Wood_Floor.jpg\n* http://tf3dm.com/3d-model/mario-and-luigi-56237.html"},{"_id":"67c12fe2822545379e000042","treeId":"67c0e6fbbfc592cec7000066","seq":7330254,"position":0.5,"parentId":"67c0eb7dbfc592cec700006c","content":"# Sensor classification.\nWith the exception of trivial HUD AR experiences ala Google Glass sensor telemetry is integral to augmented reality on two fronts. Firstly precise measurements of the position and orientation of the head mounted display must be reconciled with the position and orientation of a virtual camera in a 3D rendered in order for augmented content to appear accurate when overlaid on the real world. Conversely the position and orientation of physical objects within a scene must be known in order to provide context to augmented content. Finally the degree of freedom in which a user can navigate an augmented world stipulates the types of interactions a user can experience. DOF in AR is inherently bounded by sensor technology.\n\n"},{"_id":"67c35c6d822545379e000046","treeId":"67c0e6fbbfc592cec7000066","seq":7330258,"position":0.5,"parentId":"67c12fe2822545379e000042","content":"# Sensor criteria\n\n## Sensor Fidelity and precision. (Error rate)\nAs an AR simulation runs continuously sensors must be resistant to error accumulation. [1](https://developer.oculus.com/blog/magnetometer/) For example the Inertial Measurement Unit in a mobile device is quite effective at detecting rotation events (switching the device from portrait to landscape mode,) but prolonged use of the accelerometer (even over one second interval!) would lead to an inaccurate measurement of position. Algorithmic compensation for error rate is the domain of Signal Processing, employing techniques such as a [Kalman Filter](http://www.cs.unc.edu/~welch/media/pdf/kalman_intro.pdf)\n\nSensor fidelity is governed by sampling rate and resolution. Ideal sampling rate would be twice the rendering rate of the simulation (~6-7ms at 60 FPS)as to always have a new sample for each frame. Ideal sensor resolution would be a sub-pixel fidelity at a given rendering distance as to avoid visual artifacts. As examples IMUs that sample at 120hz are sufficient for current rendering technology (pre submillisecond render technologies like DX12/Vulkan) whereas MIT's [Chronos](https://www.usenix.org/system/files/conference/nsdi16/nsdi16-paper-vasisht.pdf) which uses Wifi time of flight to measure the location of a mobile device to 4cm would produce visual artifacts when rendering short range content.\n\n\n## \"Relative\" vs \"global\" positioning\nSensors can be classified by whether they give measurement in reference to themselves, to their scene, or to the set of all scenes. 
For example an IMU may give the local rotation of a head mounted device, a beacon may give location relative to a remapped room, and GPS would give position relative to the earth (where all [successful](http://www.forbes.com/sites/paullamkin/2015/06/29/microsoft-hololens-destroyed-in-spacex-launch-explosion/#30c2c45f691a) AR simulations have taken place.) How these measurements are interpreted and combined in the simulation layer affects the types of AR content which can be created. \n"},{"_id":"67c35e25822545379e000047","treeId":"67c0e6fbbfc592cec7000066","seq":7423943,"position":0.75,"parentId":"67c12fe2822545379e000042","content":"# Sensor survey:\nSensors in augmented reality are either mounted on device, giving position/orientation relative to scene features, or scene mounted, providing position of the HMD relative to fixed room features. Both techniques have advantages and drawbacks, and composing sensors will lead to a more immersive AR experiences. The following table is an extensive, though not exhaustive list of sensors for use in augmented reality.\n<table>\n<thead>\n<tr>\n <th>Sensor</th>\n <th>Location</th>\n <th>Pros</th>\n <th>Cons</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n <th>Inertial Measurement Unit</th>\n <th>Head Mounted</th>\n <th>Accurate Orientation[r](http://www.x-io.co.uk/res/doc/madgwick_internal_report.pdf)</th>\n <th>Inaccurate Position[r](https://www.pnicorp.com/wp-content/uploads/Accurate-PositionTracking-Using-IMUs.pdf)</th>\n</tr>\n<tr>\n <th>RGB Camera</th>\n <th>Head Mounted</th>\n <th>Accurate Orientation, Position[r](http://docs.opencv.org/3.1.0/d5/dae/tutorial_aruco_detection.html#gsc.tab=0)</th>\n <th>Tracking locked to marker/feature visibility. Precision scales computation quadratically</th>\n</tr>\n<tr>\n <th>Depth Camera</th>\n <th>Head Mounted</th>\n <th>Feature detection: surface extraction</th>\n <th>Range Limitations</th>\n</tr>\n<tr>\n <th>Lidar</th>\n <th>Head Mounted</th>\n <th>Feature detection: surface extraction</th>\n <th>High Cost[r](http://articles.sae.org/13899/)</th>\n</tr>\n<tr>\n <th>IR Pylons</th>\n <th>Scene Mounted</th>\n <th>Accurate position</th>\n <th>Limited movement[r](http://media.steampowered.com/apps/steamvr/vr_setup.pdf)</th>\n</tr>\n\n</tbody>\n</table>"},{"_id":"67c344dd822545379e000045","treeId":"67c0e6fbbfc592cec7000066","seq":7423743,"position":1,"parentId":"67c12fe2822545379e000042","content":"# Degrees of freedom from sensors\nWhen considering user experiences in Augmented Reality it is important the degree of freedom with which a user can explore the augmented scene. A perfect system would of course allow augmented content to be placed anywhere and viewed from any angle that physical content could exist, creating a perfectly immersive experience. While obviously impossible the optimal system sets the criteria by which users will judge augmented experiences.\n\n## No Sensors\nWith no position and no orientation of our virtual camera we can only create augmented HUDs, like those used in [Google Glass.](https://developers.google.com/glass/develop/gdk/live-cards) While this allows the user to keep their vision focused on their task while viewing context information HUDs provide only the tip of the iceberg for Augmented Reality. 
It is important also to note that there is experimental evidence that HUDs can actually [hinder](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130611) one's ability to focus on tasks such as piloting a vehicle.\n\n## Camera Tracking\n![](https://i.imgur.com/iOU34uw.png)\nWith the exception of specialized devices like the Google Glass or Daqari's [Smart Helmet](http://daqri.com/home/product/daqri-smart-helmet/) the vast majority of deployed AR applications use smartphones with a monocular camera. These applications work by superimposing 3D content over a marker or feature detected by the camera. With camera marker tracking a user's position is limited to the half sphere in which the camera can see the marker. This sphere's radius is the minimum distance with which the camera can detect the marker or feature, measured with a 480p camera and AR Toolkit to be between [1-2.5 meters](http://www.tinmith.net/papers/malbezin-art-2002.pdf). Increasing the resolution of the camera can increase the radius of this half sphere, at the cost of quadratically growing the cost of computation.\n\n## IMUs\n![](https://i.imgur.com/aXqSzud.png)\nAs discussed earlier Inertial Measurement Units can accurately measure orientation but not position. This technique has been used by the original Oculus development kits and mobile VR experiences like Google Cardboard. IMUs provide the inverse of the camera tracking experience: the user's position is locked to the center of a sphere. Their orientation is unlocked, allowing them to examine any point on the inside of the sphere. 360 video or VR movie theater applications effectively use IMUs for their experience, however the inability to move limit's the experience's immersion.\n\nCombining IMUs and camera tracking provides an interesting hybrid case of Augmented Reality. A user in this system can translate their position by maintaining camera tracking of their marker and then \"unlock\" their orientation by looking away from the marker. This technique creates a topology of overlapping spheres that the user can navigate. The experience would feel quite constricting however, as humans are used to moving and looking at the same time, and doing such would cause tracking artifacts: AR content bound to the marker or feature would appear off centered until the user required the marker.\n\n## Scene Mapping\n![](https://www.wareable.com/media/images/2016/02/htc-vive-manual-1454720231-dLDl-column-width-inline.jpg)\nAs an alternative to, or in conjunction with performing tracking via a head mounted sensor a user's position and orientation can be mapped by static sensors in a room, as is implemented by Valve and HTC's Vive. In this system the user has freedom of orientation and freedom of movement, provided that they remain within the sensor boundaries. This creates a potentially unpleasant user experience: unlike in the case of IMU or camera tracking the user belives that they have freedom of movement and will meet an abrupt loss of tracking when they cross the sensor boundary. \n\nThis limitation defines the type of content that can be used with scene mapping. For example a video game where the user was walking in an open area would create a suboptimal user experience: users would constantly be running into the sensor boundary. 
In contrast a virtualized workspace, attaching AR content to physical content in room could provide a more immersive simulation."},{"_id":"67c0eca5bfc592cec700006e","treeId":"67c0e6fbbfc592cec7000066","seq":7328485,"position":4,"parentId":"67c0e860bfc592cec7000069","content":"# Pilot Study\n\nOut pilot study consisted of 11 persons (3 female) testing a range of input devices across multiple domains to encapsulate the above mentioned categories. All participants were asked to complete a testset of game skills using keyboard and mouse, the Xbox Gamepad, the Steam Controller, and the Leap Motion, our Augmented Reality standin device. Our testset included a blockworld navigated in the first and third person, a racing simulator, a fighter plane simulator, and an on rails flying game in the vein of the Nintendo classic Starfox. Participants were asked to complete the four games with each of the input device, answering a semi-structured questionnaire about their experience.\n\nFor each device we asked users questions regarding the “dimensionality” of the input device. For instance a D-Pad would be effective for playing a racing game but would be ineffective for piloting a plane, where roll, pitch, and yaw require an additional axis of input to control. Users were asked about their ability to navigate their avatar within the test game they were playing but also how effective they believed the device would generalize to other tasks, such as text input or menu navigation. Finally we asked users to rate the haptic feedback of the device, including the aesthetic feel of the controller (using a control stick vs using a mouse) and servo driven vibration or resistance (“Rumble”) they noticed. The question of haptics was of particular importance to us as we postulate a lack of touch feedback to be a barrier in creating immersive AR experiences. To provide differential feedback between participants we also collected data on participants socioeconomic status, age, and experience with gaming."},{"_id":"67c0ed25bfc592cec700006f","treeId":"67c0e6fbbfc592cec7000066","seq":7328487,"position":5,"parentId":"67c0e860bfc592cec7000069","content":"# Results\nMatching demographic expectations, men were more likely to respond yes to “I am a gamer,” or “I am relatively familiar” when asked about their gaming experience. Similarly, men responded as being more proficient using an Xbox Controller or Keyboard and mouse than their female counterparts. More interesting results were found when using the Steam Controller and leap motion. While the Steam Controller resembles a traditional gamepad, it’s haptic touchpads instead of dual control sticks proved foreign for the more avid gamers, and female users performed better than their male counterparts. Even more interesting were our observations of the participants use of the Leap Motion controller: without the traditional haptic feedback of the controllers, the more experienced “gamer” identifying men found using the LEAP motion device more frustrating and performed worse at the on rails piloting game, taking longer to complete it successfully and achieving a lower score. \n\nWe postulate from our pilot study that the transition to using next generation input devices on virtual and augmented reality headsets will be more challenging for those with prior gaming experience than for those without. 
As these new devices will rely on more natural input methods such as hand gestures, skilled controller users (with traditional input devices) will have to unlearn the mental mappings they have created to be proficient with earlier devices. Conversely, those who spent less time focusing on mastering game controllers will find it easier to transition into using these next generation input devices."},{"_id":"67c0edd5bfc592cec7000070","treeId":"67c0e6fbbfc592cec7000066","seq":7328489,"position":6,"parentId":"67c0e860bfc592cec7000069","content":"# Future Work\n\nWhile our pilot study offered insight into how users with varying proficiencies interact with familiar and foreign devices, an in-depth study is necessary to get a better understand of how demographics and personal experiences affect one’s ability to master the new generation input devices. Additionally, while collecting data for our study, we were stymied by the brittleness of different game engine input, which made it extremely challenging to generalize across console, PC, and mobile input. Currently, we are working on an input testbed that will aid in collecting data about a variety of input devices that can be used to examine user experience with traditional and novel input devices."}],"tree":{"_id":"67c0e6fbbfc592cec7000066","name":"Thesis","publicUrl":"clarkrinker_thesis"}}