Designing Type In Space for HoloLens 2

Previous Project Stories

Here are some of my previous stories about type in mixed reality.

Background

As a designer who loves type design and typography, I have been in love with holographic type since I first met HoloLens back in 2015. Microsoft HoloLens allows you to place and see holographic objects in your physical environment. Just like real-world physical objects, you can place them on a table or a wall, move around, and see them from different angles.

Type In Space for HoloLens 1st gen (2016–2018)

HoloLens 2’s Instinctual Interactions

MRTK: Building blocks for spatial interactions and UI

MRTK (Mixed Reality Toolkit) v2

UX Elements: Text Object

Text is the most important component in the Type In Space app. The text object is composed of the following elements.

Text Mesh Pro

Since HoloLens 2 has a high-resolution display with 47 PPD (pixels per degree), it can display sharp and beautiful text. To properly leverage this high resolution, it is important to use a well-optimized text component. Unity’s TextMesh Pro uses the SDF (Signed Distance Field) technique to display sharp, clear text regardless of the distance. For more details, see the Typography guidelines and Text in Unity on the Mixed Reality Dev Center.
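As a rough illustration of why SDF text stays crisp: each texel of the font atlas stores a distance to the glyph edge, and the shader thresholds that distance per pixel at render time. A minimal sketch of that thresholding (in Python for illustration; 0.5 is assumed as the edge value, and the softness value is illustrative):

```python
def smoothstep(edge0, edge1, x):
    """Clamped Hermite interpolation, as in shader languages."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def sdf_coverage(distance_sample, softness=0.05):
    """Map a signed-distance sample (0.5 = glyph edge) to an opacity.

    Because the edge threshold is evaluated per pixel at render time,
    the outline stays sharp at any scale or viewing distance, unlike
    a pre-rasterized bitmap font.
    """
    return smoothstep(0.5 - softness, 0.5 + softness, distance_sample)
```

Samples deep inside the glyph come out fully opaque, samples far outside fully transparent, with a narrow antialiased band in between regardless of how much the text is scaled.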

Near & Far Manipulation

Being able to directly grab and manipulate holographic type is one of the most important core interactions. MRTK’s ManipulationHandler script allows you to achieve one- or two-handed direct manipulation.

Direct two-handed manipulation with MRTK’s Manipulation Handler
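Under the hood, the scaling part of a two-handed manipulation reduces to comparing the distance between the two grab points now versus when the gesture started. A simplified sketch of that idea (in Python for illustration; this is the general technique, not MRTK’s actual implementation):

```python
import math

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def two_hand_scale(start_a, start_b, current_a, current_b):
    """Uniform scale factor for a two-handed gesture: the ratio of
    the current hand separation to the separation at the moment
    both hands grabbed the object."""
    return distance(current_a, current_b) / distance(start_a, start_b)
```

Pulling your hands twice as far apart as when you grabbed the object doubles its scale; translation and rotation are derived similarly from the midpoint and the axis between the two hands.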

Bounding Box

The bounding box is the standard interface for precise scaling and rotation of an object in HoloLens. For the Type In Space app, I used it to indicate the currently selected text object by displaying its corner handles. MRTK’s Bounding Box provides various configurable options for the visual representation of the handles as well as their behavior.

Bounding Box in HoloLens 2

UX Elements: Menu UI for Text Properties

Button

The button is one of the most foundational UI components. In HoloLens 2, you can directly press buttons with hand-tracking input. However, since you are essentially pressing through the air without any physical tactile feedback, it is important to amplify visual and audio feedback.

MRTK’s HoloLens 2 button
MRTK’s HoloLens 2 button provides various types of visual feedback
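One way to think about amplifying that feedback: normalize the finger’s press depth into a 0–1 progress value and drive the visual and audio cues from it. A hypothetical sketch (the 16 mm travel and 0.9 trigger threshold are illustrative values, not documented MRTK constants):

```python
def press_progress(press_depth_m, max_depth_m=0.016):
    """Normalize mid-air press depth (meters) to a 0-1 value that can
    drive compensating feedback in place of the missing tactile
    response."""
    return min(max(press_depth_m / max_depth_m, 0.0), 1.0)

def feedback_states(progress, trigger_threshold=0.9):
    """Derive simple feedback states from press progress."""
    return {
        "highlight_intensity": progress,  # visual: brighten as the finger sinks in
        "play_press_sound": progress >= trigger_threshold,  # audio: fire near full depth
    }
```

Halfway through the travel the button is half-lit; the press sound fires only near full depth, giving a clear moment of commitment.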

Hand Menu

In the original version, I had a floating menu with tag-along behavior. It followed the user so that it could be accessed at any time. In HoloLens 2, there is an emerging pattern called the ‘hand menu’. It uses hand tracking to display quick menus around the hands. This is very useful for showing a contextual menu when it is needed, then hiding it so you can continue interacting with the target content. To learn more about the hand menu, see the Hand Menu design guidelines on the Mixed Reality Dev Center.

Hand Menu for the text properties
Hand Menu Explorations — Pin / Grab & Pull to world-lock the menu
Focal depth switching between the target object and the menu causes eye strain

In-Between Menu

My solution was to place the text property menus between the target object and my eyes (headset), closer to the target text object, since the goal is to minimize focal depth switching. After playing with the values, I placed the menu 30% of the way from the target object (70% of the way from my eyes). This allowed me to directly interact with the menus easily, with the text objects in the near field. This menu positioning and sizing also works well with the HoloLens 1st gen’s smaller FOV.

Minimized focal depth switching between the target object and the menu. The menu automatically scales up/down based on the distance to maintain a constant apparent size.
The menu automatically scales up/down based on the distance to maintain a constant size
In-Between menu works well with near interactions too
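The placement math behind this is simple: interpolate along the line from the head to the target, then scale the menu with its distance from the head so its apparent (angular) size stays constant. A sketch of that logic (in Python for illustration; the 0.7 fraction matches the 70%/30% split described above, and `reference_distance` is an assumed tuning value):

```python
import math

def lerp3(a, b, t):
    """Linear interpolation between two 3D points."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def place_in_between_menu(head, target, fraction_from_head=0.7,
                          base_scale=1.0, reference_distance=1.0):
    """Place the menu 70% of the way from the head to the target
    (30% short of the target), and scale it proportionally to its
    distance from the head so its angular size stays constant."""
    menu_pos = lerp3(head, target, fraction_from_head)
    d = math.dist(head, menu_pos)
    scale = base_scale * d / reference_distance
    return menu_pos, scale
```

A target 1 m away puts the menu at 0.7 m and scales it to 70% of the reference size; as the target moves closer or farther, the menu keeps roughly the same on-screen footprint.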

UX Elements: Main Menu for the global features

Main Menu
The main menu for the global features
  • New Text | Clear Scene | Save & Load Scene
  • Spatial Mapping | Mesh Visualization | Snap to Surface
  • Physics: Rigid Body | Kinematic | Slider UI for gravity (force)
  • Grab & Duplicate Toggle
  • Random Composition (similar to Holographic Type Sculpture)
  • About

UX Elements: Annotation

Being able to place text objects in physical space means that you can annotate physical objects. To help create a visual connection to the target object, I added an optional line and sphere component. By simply grabbing and moving the text object (anchor) and the sphere (pivot), you can easily create annotations with a connecting line between the text and the sphere. For this feature, I used MRTK’s Tooltip component.

Annotation Feature
Annotation on the physical objects

UX Elements: Spatial Mapping

Used Spatial Mapping + Surface Magnetism to align the text with the physical surface
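In essence, surface magnetism raycasts against the spatial-mapping mesh, then positions the object slightly off the hit point along the surface normal and orients it to face along that normal. A minimal sketch of the snap step (in Python for illustration; the raycast itself, which MRTK performs against the mesh, is assumed, and the offset value is illustrative):

```python
import math

def normalize(v):
    """Unit-length copy of a 3D vector."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def snap_to_surface(hit_point, hit_normal, surface_offset=0.01):
    """Place an object a small offset off the surface along its normal.

    Returns the snapped position and the facing direction (the normal),
    so the text sits flat against walls, tables, and other surfaces
    without z-fighting with the mesh.
    """
    n = normalize(hit_normal)
    position = tuple(p + c * surface_offset for p, c in zip(hit_point, n))
    return position, n
```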

UX Elements: Physics

With Spatial Mapping, I have physical surfaces that I can use. How about applying gravity to make the type fall or fly, and letting it collide with the physical environment? I already had a simple ‘gravity’ option in the original version. In the new version, I added a slider so that the user can control the amount of the force as well as its direction.

Physics options
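The slider-to-physics mapping can be as simple as scaling a direction vector by the slider value before applying it as a force each frame. A sketch of that mapping (in Python for illustration; the 9.81 maximum is an illustrative choice, not the app’s actual value):

```python
def gravity_from_slider(slider_value, direction=(0.0, -1.0, 0.0),
                        max_magnitude=9.81):
    """Map a 0-1 slider value to a gravity force vector.

    Flipping the direction (e.g. to (0, 1, 0)) makes the type 'fly'
    upward instead of falling; intermediate slider values give a
    gentle drift rather than a hard drop.
    """
    t = min(max(slider_value, 0.0), 1.0)
    return tuple(c * t * max_magnitude for c in direction)
```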

UX Elements: Text Input with keyboard and speech

HoloLens 2’s direct interaction with holograms dramatically improved the text input experience too. Just as with a physical keyboard, you can use your fingers to type text on the holographic keyboard. Of course, you can still use speech input for dictation; the system keyboard has a built-in speech dictation button. MRTK provides great examples of using the system keyboard and speech input.

Keyboard and speech input

UX Elements: Grab & Duplicate

The original version had a simple duplicate feature that allows you to quickly create new text with the same text properties. To keep the duplicate visible, I placed it at a small positional offset from the original. This created an interesting visual effect with an array of instances.

Grab & duplicate feature
Layout example
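The array effect comes from chaining the same fixed offset: each duplicate is placed one step further from the original. A sketch of that placement logic (in Python for illustration; the default offset values are illustrative):

```python
def duplicate_positions(origin, count, offset=(0.05, 0.02, 0.0)):
    """Positions for repeated grab-and-duplicate: copy i is placed at
    origin + i * offset, so repeated duplication yields an evenly
    spaced array of text instances."""
    return [tuple(o + i * d for o, d in zip(origin, offset))
            for i in range(1, count + 1)]
```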

Supporting HoloLens 1st gen and Windows Mixed Reality VR devices

One of the benefits of using MRTK is cross-platform support. MRTK’s interaction building blocks and UI components support various types of input, such as HoloLens 1’s GGV (Gaze, Gesture, Voice) and the Windows Mixed Reality immersive headsets’ motion controllers.

Type In Space on HoloLens 1st gen with Gaze & Air-tap
Type In Space on Windows Mixed Reality VR headset with motion controller input

Screen recordings on YouTube

I have recorded the process of building the core UX elements using Unity and MRTK.

Microsoft Mixed Reality Dev Days 2020 session recordings — MRTK’s UX Building Blocks

Resources

Twitter

Read my other stories