This post follows: Part 11: Using an Xbox One Controller with Unity on Windows 10
Putting the pieces together (the CameraRig control script, Xbox controller input, and the scene manager), we set up a new scene to test out complex, real 3D movement. Let's build a space module docking example to appreciate the complexity of 3D maneuvering without the hand-waving simplification we see on TV and in movies and games.
In space we have limited options for orientation: there is no compass direction or horizon, so we substitute a reference point and a reference plane. We can simplify movement a little by not strictly following Newton's first law and letting motion slow down automatically; we'll just pretend to have a sophisticated navigation computer (while still maneuvering manually instead of simply clicking our destination).
We use the dual sticks, triggers, and bumpers of the controller to control the movement and turns of our POD in 3D and hit a specific docking location. The controller is configured like most flight simulator or drone setups:
| Control | Action |
| --- | --- |
| Left Stick | Lateral Movement (Left, Right, Up, Down) |
| Right Stick | Pitch, Roll Rotation |
| Bumpers | Yaw (Rudder) Rotation |
| Triggers | Forward / Back Movement |
| View Button | Switch Between Camera and POD Controls |
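To make the mapping concrete, here is a minimal sketch of reading these controls with Unity's legacy Input Manager. The axis names ("LeftStickX", "LeftStickY", "RightStickX", "RightStickY", "Triggers") and the `PodInput` class are placeholders rather than the scripts from this series; substitute whatever axis names you configured in Part 11. On Windows the Xbox bumpers report as joystick buttons 4 and 5.

```csharp
using UnityEngine;

// Sketch: reads the controller mapping from the table above into two vectors.
// Axis names are assumptions; define them in the Input Manager (see Part 11).
public class PodInput : MonoBehaviour
{
    public Vector3 Translation { get; private set; } // x = lateral, y = vertical, z = forward/back
    public Vector3 Rotation    { get; private set; } // x = pitch, y = yaw, z = roll

    void Update()
    {
        float lateral  = Input.GetAxis("LeftStickX");  // left stick: left / right
        float vertical = Input.GetAxis("LeftStickY");  // left stick: up / down
        float thrust   = Input.GetAxis("Triggers");    // right trigger forward, left trigger back

        float pitch = Input.GetAxis("RightStickY");    // right stick: pitch
        float roll  = -Input.GetAxis("RightStickX");   // right stick: roll

        // Bumpers act as a digital rudder for yaw (LB = joystick button 4, RB = button 5 on Windows).
        float yaw = 0f;
        if (Input.GetKey(KeyCode.JoystickButton4)) yaw -= 1f;
        if (Input.GetKey(KeyCode.JoystickButton5)) yaw += 1f;

        Translation = new Vector3(lateral, vertical, thrust);
        Rotation    = new Vector3(pitch, yaw, roll);
    }
}
```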
Managing movement in 3D on a 2D monitor is tricky; in the example I switch between Camera and POD control to verify the actual position and alignment. One solution is to add secondary maps or displays as another visual position indicator. This works for 2D monitors but not necessarily for VR and mixed reality headsets, where a secondary display or mini-map is tricky (we would be adding another head-up display [HUD] to our head-mounted gear and playing Iron Man), especially if it is only a 2D render inside an already immersive 3D view.
The sample scene is set up from a third-person view instead of a first-person cockpit to emphasize movement in 3D space. A system like HoloLens or Project Tango minimizes camera/object switching since we can control the camera by physically moving around the object to get a different view of the scene; the Oculus Rift with its longer tether cable, or the Gear VR, can also work, but it requires moving around a relatively empty physical space to avoid bumping into or tripping over the real world.
To set up the scene, create two GameObjects: the POD and the MOTHERSHIP. I started with a cylinder and scaled the front into a cone shape so I can test whether I can slide into position when coming in misaligned. For a clean fit, I duplicated the cone section and extruded it to become the station, then extended the sides to make the rotation easy to see. The shapes are relatively simple and readily created with your choice of 3D modeling software.
In the import settings for the 3D objects in Unity, enable the "Generate Colliders" option. The colliders are used along with a Unity Rigidbody to handle the movement physics of the scene. Configure the POD's Rigidbody with Mass = 10, Drag = 0.1, and Angular Drag = 0.2 so maneuvering is slightly easier and you don't have to apply a counter-force for every action.
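For reference, here is a small sketch that applies the same Rigidbody values from a script instead of the Inspector; disabling gravity is an extra assumption for the space scene, not something stated above.

```csharp
using UnityEngine;

// Sketch: the POD Rigidbody setup described above, applied from code.
[RequireComponent(typeof(Rigidbody))]
public class PodPhysicsSetup : MonoBehaviour
{
    void Awake()
    {
        Rigidbody body = GetComponent<Rigidbody>();
        body.mass = 10f;
        body.drag = 0.1f;         // gently bleeds off linear velocity so we don't have to counter-thrust
        body.angularDrag = 0.2f;  // same idea for rotation
        body.useGravity = false;  // assumption: no gravity in the docking scene
    }
}
```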
We'll need to create a new POD control script and scene manager, and make adjustments to the existing CameraRig.cs to handle the controller input. I'll discuss those changes in later posts.
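As a rough preview only (the actual scripts are covered in the later posts), a POD control script can feed the stick values into the Rigidbody with AddRelativeForce and AddRelativeTorque so every input is applied relative to the POD's current orientation. The `PodInput` component and the force/torque scale values below are placeholders.

```csharp
using UnityEngine;

// Sketch: drives the POD Rigidbody from the controller input read by PodInput.
[RequireComponent(typeof(Rigidbody), typeof(PodInput))]
public class PodController : MonoBehaviour
{
    public float thrustForce = 50f;  // placeholder scale for translation
    public float torqueForce = 5f;   // placeholder scale for rotation

    Rigidbody body;
    PodInput  input;

    void Awake()
    {
        body  = GetComponent<Rigidbody>();
        input = GetComponent<PodInput>();
    }

    void FixedUpdate()
    {
        // Local-space forces/torques: the sticks always move and turn the POD
        // relative to where it is currently pointing.
        body.AddRelativeForce(input.Translation * thrustForce);
        body.AddRelativeTorque(input.Rotation * torqueForce);
    }
}
```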
For questions, comments or contact – Twitter @rlozada