- When designing a UI for a SmartEyeglass app, you must make sure not to distract the user.
- The UI structure dictates particular choices and limits for fonts, icons, and selectable choices.
It is recommended that you follow these guidelines when designing your SmartEyeglass application to ensure that users have a safe and consistent experience when using SmartEyeglass.
When choosing how to display information on SmartEyeglass, you must consider the unique nature of this device. The user wears it and the display appears overlaid on the real world; the user is not choosing to look down at it.
SmartEyeglass uses a hierarchical graphical user interface for its applications. The top layer is a horizontally-scrolling menu that allows users to select applications.
Each item in the top layer is called a card. When a user taps the top-layer card for your application, your extension runs automatically.
This figure shows how the cards that start the Home, Camera, and Twitter applications are displayed horizontally adjacent to each other, and the screens that are defined for each application are on subsequent layers.
Your application UI is expected to have this layered structure. When the user starts your app by selecting the top-layer card with a tap on the controller, the display shifts vertically down one layer to the screen that your app defines.
When designing your application UI, keep these design rules in mind:
- Avoid too many layers. Complex hierarchies require more steps and demand more mental effort to understand. We recommend limiting the depth to 4 or 5 layers.
- Adhere to the hierarchy. Applications that violate the layer hierarchy are confusing. Make sure your users always know where they are in your application and in the device environment.
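The layered navigation described above can be modeled as a simple stack: a tap descends one layer, the back button ascends one. A minimal sketch in plain Java, independent of the SmartEyeglass SDK (the class and method names are hypothetical), that also enforces the recommended depth limit:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical model of the layered UI: tap pushes a screen, back pops one. */
class LayerStack {
    /** Recommended maximum depth, per the guideline above. */
    public static final int MAX_DEPTH = 5;

    private final Deque<String> layers = new ArrayDeque<>();

    public LayerStack(String topCard) {
        layers.push(topCard); // layer 1: the entry card
    }

    /** Tap on a card: descend one layer, refusing to exceed the recommended depth. */
    public boolean tap(String screen) {
        if (layers.size() >= MAX_DEPTH) {
            return false; // too deep: restructure the UI instead of nesting further
        }
        layers.push(screen);
        return true;
    }

    /** Back button: move up one layer; the top-layer card is never popped. */
    public void back() {
        if (layers.size() > 1) {
            layers.pop();
        }
    }

    public int depth() {
        return layers.size();
    }

    public String current() {
        return layers.peek();
    }
}
```

Keeping all navigation changes behind one model like this makes it easier to guarantee that the user always knows where they are in the hierarchy.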
Designing the entry card
You can implement the top-layer card for your application using the WidgetExtension class in the SmartEyeglass SDK. For details, see the Widgets guide. You can choose the icon, the title, and additional text to represent your application. For example, an SMS application might display the image associated with the account rather than an application icon, and show the account name rather than the application title.
Designing app layer displays
Once the user has selected the card that invokes your application, the screens that make up your application UI reside in the second and subsequent layers. You should design a set of screens to suit your application. You can use XML-based layouts or bitmaps to define the display, and also provide option menus. For details of how to define and display screens, see the User interface guide.
Users can shift up and down between layers of the application with a tap on the touch sensor or by pressing the back button. Swiping left or right on the touch sensor scrolls horizontally through list items. When a textbox shows text that is too long for the display area, users can initiate a vertical scroll. As a developer, you must implement the program logic that decides what each gesture means in the context of your application. For more information, see the design guidelines for scrolling.
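As an illustration of the vertical-scroll logic you must supply yourself, here is a minimal sketch in plain Java (the class and field names are hypothetical) that clamps a scroll offset for a text box whose content is taller than the display area:

```java
/** Hypothetical helper that clamps a vertical scroll offset for a long text box. */
class TextScroller {
    private final int contentHeight; // total rendered text height, in pixels
    private final int viewHeight;    // visible text-box height, in pixels
    private int offset;              // current scroll position, in pixels

    public TextScroller(int contentHeight, int viewHeight) {
        this.contentHeight = contentHeight;
        this.viewHeight = viewHeight;
    }

    /** Apply a swipe delta while the user is in press-and-hold (drag-to-scroll) mode. */
    public int scrollBy(int delta) {
        int max = Math.max(0, contentHeight - viewHeight);
        offset = Math.min(max, Math.max(0, offset + delta));
        return offset;
    }

    /** True when there is more text below the visible area. */
    public boolean canScrollDown() {
        return offset < contentHeight - viewHeight;
    }
}
```

Clamping at both ends keeps the display stable when the user over-swipes, which matters on a device the user cannot look down at.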
Here are some examples of app-specific screens:
You should follow the general readability guidelines that are established for traditional mobile devices. For SmartEyeglass, we recommend these specific choices for displaying text, icons, and menus or selectable items.
SmartEyeglass has an 8-bit monochrome display of 419×138 pixels. Most application UIs can be designed to fit into this area. You can use the layer and card architecture to partition information into small chunks, and implement horizontal scrolling to display longer text or additional choices.
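Partitioning information into small chunks can be as simple as paging a string by the number of characters that fit on one screen. A sketch in plain Java (the per-screen capacity is an assumption that depends on your font size; the class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical paginator that splits long text into screen-sized chunks. */
class TextPager {
    /** Splits text into pages of at most charsPerPage characters each. */
    public static List<String> paginate(String text, int charsPerPage) {
        List<String> pages = new ArrayList<>();
        for (int i = 0; i < text.length(); i += charsPerPage) {
            pages.add(text.substring(i, Math.min(text.length(), i + charsPerPage)));
        }
        return pages;
    }
}
```

Each page can then be shown on its own card, with swipes moving horizontally between pages.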
Text display guidelines
If you specify your display as a layout with a TextView element, the standard typeface for the device is used. You should specify an appropriate font size for the intended usage. The minimum recommended font size is 18 pixels.
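For example, a layout-based display might set the text size explicitly in device pixels. A sketch only; the exact layout elements and attributes supported are described in the User interface guide:

```xml
<!-- Sketch of a layout element that keeps text at the recommended minimum size.
     Sized in px because the guideline is stated in device pixels. -->
<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:textSize="18px"
    android:text="Hello" />
```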
Your font usage should take account of the user’s movement and environment.
- When you detect that the user is sitting down or standing still, it is safe to use a smaller font size. Not too small, though: a font size smaller than 18 pixels is likely to be difficult for most users to read in any circumstances.
- When the user is moving around, you should use larger fonts that are easier to read. You should also display less text overall to avoid distraction and interfering with the user’s view of their surroundings.
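These two rules reduce to a tiny lookup. A sketch in plain Java: the 18-pixel floor follows the guideline above, while the 24-pixel moving size and all names are assumptions for illustration:

```java
/** Hypothetical font-size picker based on the movement guidelines above. */
class FontSizePicker {
    public static final int MIN_SIZE_PX = 18;     // recommended minimum, never go below
    public static final int MOVING_SIZE_PX = 24;  // assumed larger size while moving

    /** Returns a font size in pixels for the requested size and movement state. */
    public static int pick(int requestedPx, boolean userIsMoving) {
        int size = Math.max(requestedPx, MIN_SIZE_PX); // enforce the 18 px floor
        if (userIsMoving) {
            size = Math.max(size, MOVING_SIZE_PX);     // larger text while moving
        }
        return size;
    }
}
```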
It can be difficult, and possibly unsafe, for the user to read long blocks of text on the SmartEyeglass display. We recommend that you provide options that allow the user to choose whether to read longer messages on the host phone instead.
Icon design guidelines
The minimum recommended icon size is 18×18 pixels, so that the icon can be easily recognized by most users. For users who are walking or moving around, we recommend an icon size of at least 52×52 pixels.
- Choose icons that are readily recognizable, so that most users will immediately know what they mean.
- Be aware that ambient light can make icons harder to recognize. Applications intended for outdoor use should make use of brighter and larger icons.
Selectable choice guidelines
When you need to present users with a set of choices, take account of the limited space available in the display. You can make the best use of the display width by limiting both the number of items to choose from, and the string length of each item.
Use simple phrases to ensure that items are easy to read and recognize. Also, try to phrase help text and choices so that they fit into the display width.
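One way to enforce the width limit is to truncate labels before display. A sketch in plain Java, assuming an approximate fixed character width; in practice the width per character depends on the typeface and font size, and the class name is hypothetical:

```java
/** Hypothetical label fitter for the 419-pixel-wide display. */
class LabelFitter {
    public static final int DISPLAY_WIDTH_PX = 419;

    /**
     * Truncates a label, appending an ellipsis, so that it fits the display
     * width at the given approximate character width in pixels.
     */
    public static String fit(String label, int approxCharWidthPx) {
        int maxChars = DISPLAY_WIDTH_PX / approxCharWidthPx;
        if (label.length() <= maxChars) {
            return label;
        }
        return label.substring(0, Math.max(0, maxChars - 1)) + "\u2026";
    }
}
```

Truncation is a fallback; rephrasing the choice so it fits whole is always the better option.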
Adjusting brightness and depth
The clarity of your display depends not only on your design choices but also on the environment in which the application is used. Both the brightness of the display and the apparent depth at which the display plane appears are adjusted by the user, and both typically need adjusting for different environments.
Screen brightness: The display brightness (luminance) that allows your user to see and understand your UI when they are indoors next to a table or wall is not sufficient when they are outdoors with nothing around them, or near a bright source of light. For dark conditions, low to medium screen brightness is safer.
Image depth: Our eyes adjust to the angle formed by the line of sight between each eye and an object, called the angle of convergence. This automatic adjustment is what allows us to perceive an object as one thing instead of two. SmartEyeglass displays a separate image to each eye. When you adjust the angle of convergence for these images, you perceive the display as being nearer to or further from your eyes.
In an indoor environment, both text and images are easier to see when they appear closer. Outdoors, when there are lots of things in the background, the display is easier to see if it appears further away. See the Settings guide for details of how to modify the apparent depth programmatically.
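For intuition, the angle of convergence can be estimated from the distance between the eyes and the apparent depth. The arithmetic below is only illustrative and is not part of the SmartEyeglass API; the 63 mm interpupillary distance is an assumed typical value:

```java
/** Illustrative convergence-angle arithmetic; not part of the SmartEyeglass API. */
class Convergence {
    public static final double IPD_MM = 63.0; // assumed typical interpupillary distance

    /** Angle of convergence, in degrees, for an object at the given distance (mm). */
    public static double angleDegrees(double distanceMm) {
        return Math.toDegrees(2.0 * Math.atan((IPD_MM / 2.0) / distanceMm));
    }
}
```

At an apparent depth of 1 m the angle is roughly 3.6 degrees; at 5 m it falls to about 0.7 degrees, which is why a "further away" display sits more comfortably against a busy outdoor background.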
SmartEyeglass input methods
The SmartEyeglass API defines intents for various types of user input operations that originate with the controller.
The touch sensor on the controller allows swipe, tap, and press-and-hold gestures. The left-right direction of the swipe gesture is encoded, but it is up to your application to interpret a swipe as a horizontal or vertical scroll according to the context. The tap gesture should execute a command in your UI.
User input gestures
These are the recommended operations that you should implement in response to user-input gestures:
| Key assignment / button name | Recommended operation |
| --- | --- |
| Swipe (left/right) | Switch the screen left or right, or move up or down the layers. |
| Tap | Execute a command. |
| Press and hold | Switch to a drag-to-scroll mode, which allows the user to swipe while holding in order to scroll down through long text in a text box on a single screen. |
| Back button | By default, moves up one layer in the UI. If your layering is more complicated, you can intercept this action to move up to the next logical layer within your app. For more information, see the design guidelines for scrolling. |
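The table above can be expressed as a single dispatch point in your control logic. A sketch in plain Java; the SmartEyeglass SDK delivers these events through its own intents and callbacks, so the enum and method names here are hypothetical stand-ins:

```java
/** Hypothetical gesture-to-operation mapping following the table above. */
class GestureDispatcher {
    public enum Gesture { SWIPE_LEFT, SWIPE_RIGHT, TAP, PRESS_AND_HOLD, BACK }

    /** Returns the recommended operation for each gesture, as a label. */
    public static String operationFor(Gesture g) {
        switch (g) {
            case SWIPE_LEFT:
            case SWIPE_RIGHT:
                return "switch-screen-or-layer";
            case TAP:
                return "execute-command";
            case PRESS_AND_HOLD:
                return "drag-to-scroll";
            case BACK:
                return "up-one-layer";
            default:
                return "ignore";
        }
    }
}
```

Routing every gesture through one mapping keeps the behavior consistent across all of your screens, which is what makes the hierarchy predictable for the user.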
Hardware activation buttons
The Talk and Camera buttons activate the microphone and camera in the device. You cannot use these directly for user input, but you can interpret the data that is returned by these sensors when they are active. For more information on these functions, see the Voice-to-text input guide and Camera guide.
When designing your input style, keep in mind that voice recognition is generally less accurate in noisy environments, such as crowded or windy places. You should always supply an alternative input method, in case user speech is not recognized.