This post follows: Part 8: Creating a Gaze based Input Module for Unity
The gaze input module uses the center of the screen as the detection point, tied to the user's camera view. On a virtual reality platform this works because the scene camera is controlled directly by the device to match the user's head movement and effective view of the world. This setup is fine if the environment limits the user to a fixed position or track (a rail setup), but it introduces challenges for movement in an open world, given the limited tracking area of VR systems and the limited physical movement of a user who is tethered, visually obstructed, or simply short on space.
The typical approach for user movement in first-person games and apps is to separate camera movement from camera view controls. In a VR-configured app, however, we usually cannot modify the position and rotation of the camera object because it is controlled directly by the VR system. The recommended solution for separate control of the camera is to parent the scene camera to a GameObject that can be controlled by script; this GameObject is usually configured as the CameraRig.
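To make the idea concrete, here is a minimal sketch of the parenting approach: a script on the rig GameObject moves the rig itself, while the VR system keeps writing to the child camera's local transform. The class and field names are my own placeholders; the actual control script is covered in the next post.

```csharp
using UnityEngine;

// Sketch only: attach to a parent "CameraRig" GameObject whose child
// is the VR-controlled scene camera. We move the rig, not the camera.
public class CameraRigSketch : MonoBehaviour
{
    public float moveSpeed = 3f; // meters per second (example value)

    void Update()
    {
        // Standard "Horizontal"/"Vertical" axes drive the rig's position;
        // the headset remains free to rotate the child camera.
        Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                   Input.GetAxis("Vertical"));
        transform.Translate(move * moveSpeed * Time.deltaTime);
    }
}
```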
The long-term plan is to use a game controller, as it provides multiple axes for movement and view along with several action buttons. Alternative approaches for "gaze and tap" use the gaze plus a simpler control device with buttons and sometimes a directional control, such as those found on selfie sticks, media remote controls, the Gear VR touchpad and the Oculus Remote. Let's assume (and pretend) these input devices are accessible to Unity as joystick axes, buttons, mouse or keyboard. One very convenient capability of Unity is that we can consolidate input into named buttons and axes through the Input configuration. I'll set up dual-stick behavior with the keyboard, mouse and Xbox One controller.
On the Main Menu Bar, click “Edit / Project Settings / Input” to access the InputManager and view the default configurations.
Notice that some of the entries share a name but have different definitions. This allows a script to use the same reference while drawing from different input setups, such as "Horizontal" mapping to the keyboard keys A, D, Left and Right as well as to the joystick X-axis.
In the current setup, the usual WASD and Arrow keys map to the "Horizontal" and "Vertical" inputs. To map to a multi-axis game controller, we can pair the WASD keys with the Left Stick and the Arrow keys with the Right Stick. Following first-person-shooter conventions, WASD/Left Stick stay on "Horizontal" and "Vertical" (which is really forward-back movement), and the Arrow keys/Right Stick become "HorizontalTurn" and "VerticalTurn".
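With those entries in place, a script can read movement and view as separate axis pairs. A quick sketch, assuming the "HorizontalTurn" and "VerticalTurn" entries described above have been added to the InputManager:

```csharp
using UnityEngine;

// Sketch: dual-stick reads. "HorizontalTurn"/"VerticalTurn" are the
// custom InputManager entries added alongside the built-in pair.
public class DualStickReader : MonoBehaviour
{
    void Update()
    {
        float moveX = Input.GetAxis("Horizontal");     // A/D, Left/Right, Left-Stick X
        float moveZ = Input.GetAxis("Vertical");       // W/S, Up/Down, Left-Stick Y
        float turnX = Input.GetAxis("HorizontalTurn"); // Arrow keys / Right-Stick X
        float turnY = Input.GetAxis("VerticalTurn");   // Arrow keys / Right-Stick Y
        // Movement and view are now driven by independent axis pairs.
    }
}
```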
Dual-stick controls work well in a 2D app, but VR operates in 3D space, so movement can get tricky without an intuitive control for the third dimension, especially in open space. Since we're using a free-movement camera, we can introduce an "Altitude" control to change the camera height without having to look up or down first (in this example, the Q and E keys are used).
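The altitude control can be read the same way as the other axes. A small sketch, assuming an "Altitude" entry (negative button Q, positive button E) has been added to the InputManager; `climbSpeed` is an example value:

```csharp
using UnityEngine;

// Sketch: an "Altitude" axis (Q/E in this example) raises or lowers
// the rig directly, without pitching the view first.
public class AltitudeControl : MonoBehaviour
{
    public float climbSpeed = 2f; // meters per second (example value)

    void Update()
    {
        float altitude = Input.GetAxis("Altitude");
        transform.Translate(0f, altitude * climbSpeed * Time.deltaTime, 0f);
    }
}
```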
We're currently using keyboard controls to explain the concept. With the same configuration, we can assign game controller input later by using the corresponding axis and button IDs that match the Xbox One controller or similar input devices.
There are multiple ways to edit the InputManager entries:
- Change the size to add additional entries and edit the new items
- Use the usual Unity CTRL-D to duplicate an entry, then edit its values
- Assuming Asset Serialization is set to "Force Text" in Editor Settings, /ProjectSettings/InputManager.asset can be edited (carefully) to add or modify entries, since it's just a list of name/value pairs. I like to group and organize the entries, so I usually edit the asset file to keep all the controller, button and mouse entries together.
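For reference, a single axis entry in the text-serialized InputManager.asset looks roughly like the YAML fragment below. This is a sketch from memory rather than an authoritative template; the field values shown (a keyboard-driven "HorizontalTurn" using the arrow keys) are examples, and the safest approach is still to copy an existing entry and edit it.

```yaml
# One entry under m_Axes in /ProjectSettings/InputManager.asset
# (field names per Unity's "Force Text" serialization; values are examples)
- serializedVersion: 3
  m_Name: HorizontalTurn
  descriptiveName:
  descriptiveNegativeName:
  negativeButton: left
  positiveButton: right
  altNegativeButton:
  altPositiveButton:
  gravity: 3
  dead: 0.001
  sensitivity: 3
  snap: 1
  invert: 0
  type: 0          # 0 = key/mouse button, 1 = mouse movement, 2 = joystick axis
  axis: 0
  joyNum: 0
```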
In the next post, I'll discuss the CameraRig control script.
For questions, comments or contact, follow me on Twitter @rlozada