Touch, gesture, and direct manipulation in Microsoft Surface experiences move away from discrete actions toward continuous actions. In GUI applications, discrete actions are typically brief, single-click actions that users perform in sequence to complete a task. For example, to move an object from one location to another, a user selects the object, selects the appropriate command, and then moves the object.
In contrast, direct manipulation favors continuous actions. To move an object from one location to another, the user can just grab it and move it to its new location, as shown in the following illustration.
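The grab-and-move interaction can be sketched as a small amount of logic: on contact down, remember where inside the object the user grabbed it; on every contact move, keep that same point under the finger. All names here are illustrative, not a Surface API.

```typescript
// Minimal sketch of direct manipulation: the object tracks the contact
// continuously instead of requiring select -> command -> move.
interface Point { x: number; y: number; }

class DraggableObject {
  private grabOffset: Point | null = null;

  constructor(public position: Point) {}

  // On contact down: remember where inside the object the user grabbed it.
  grab(contact: Point): void {
    this.grabOffset = {
      x: contact.x - this.position.x,
      y: contact.y - this.position.y,
    };
  }

  // On every contact move: keep the grabbed point of the object under the
  // finger, which is what makes the action feel continuous and physical.
  moveTo(contact: Point): void {
    if (!this.grabOffset) return;
    this.position = {
      x: contact.x - this.grabOffset.x,
      y: contact.y - this.grabOffset.y,
    };
  }

  // On contact up: the object simply stays where it was released.
  release(): void {
    this.grabOffset = null;
  }
}
```

Because the object's position is recomputed on every move event, there is no intermediate "command" step: the feedback loop between touch and result is continuous.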
Must
- Give every touch an immediate visual response, even if the ultimate result of the input takes time to compute. A pre-made response is better than a delayed custom response.
- Remember that size matters. In GUI applications, the position of the mouse is represented as a single point on the screen. When fingers and objects are input devices, you must properly size interactive elements to accommodate these input methods, and you must position them so a user’s hand, arm, or input object does not block relevant content around an interactive element.
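A size check like the one described above reduces to a physical-units calculation: convert a minimum physical target size into pixels at the screen's DPI and compare. The 9 mm minimum below is an assumption for the sketch, not an official figure; substitute your own guideline value.

```typescript
// Assumed minimum physical size for a finger target, in millimeters.
const MIN_TARGET_MM = 9;

// Convert a physical length to pixels at a given screen density.
function mmToPixels(mm: number, dpi: number): number {
  return (mm / 25.4) * dpi; // 25.4 mm per inch
}

// True if an interactive element is large enough for a fingertip.
function meetsTouchMinimum(widthPx: number, heightPx: number, dpi: number): boolean {
  const minPx = mmToPixels(MIN_TARGET_MM, dpi);
  return widthPx >= minPx && heightPx >= minPx;
}
```

The point of working in millimeters rather than pixels is that a target that is comfortably large on a low-density screen can shrink below fingertip size on a high-density one.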
Should
- Position immediate visual responses at the point of contact so that every touch appears to cause a direct, physical reaction.
- Support many fingers, many users, and many objects by ensuring that your application responds well to many forms of simultaneous input. Apply the social principle and make sure that virtual and physical objects blend seamlessly.
- Use affordances for contact types and methods. Microsoft Surface can distinguish between tags, fingers, and large areas of contact (blobs), but it cannot identify which fingers are on the same hand or user, and it cannot identify different types of blobs (hands versus arms versus untagged objects). Provide visual affordances and constraints to tell users how and where they should touch the Surface screen.
- Design for accidental activations so that users can see and undo actions when they touch the screen unintentionally. Accidental activations typically occur with conversational gestures, when draping clothing touches the screen, and when users rest their arms on a Microsoft Surface unit.
Note: A conversational gesture is a movement that one user makes to explain or articulate a concept to another user. It typically occurs when one user points to an object on the Surface screen, which accidentally activates that object.
Could
- Extend direct manipulation by enabling user input in one area to cause a change in another part of the display. To create this type of manipulation, use super-realism to separate cause and effect, and apply the principle of scaffolding. However, you must also provide immediate, in-place feedback and make the connection between cause and effect clear to all users.
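The separated cause-and-effect pattern above can be sketched as a small observer: the touched control fires its local, in-place feedback first, then notifies the linked elements elsewhere on the display. Names are illustrative assumptions, not a platform API.

```typescript
type Listener = (value: number) => void;

// A control touched in one region that drives linked elements elsewhere.
class LinkedControl {
  private listeners: Listener[] = [];
  value = 0;

  // Remote elements subscribe, making the cause -> effect link explicit.
  onChange(listener: Listener): void {
    this.listeners.push(listener);
  }

  // Called from the control's contact handler. Local feedback runs first,
  // at the point of contact, before the remote effect is propagated.
  setValue(value: number, localFeedback: () => void): void {
    localFeedback();
    this.value = value;
    this.listeners.forEach((l) => l(value));
  }
}
```

Running the in-place feedback before notifying listeners keeps the guarantee that every touch produces an immediate response at the point of contact, even when the visible effect is elsewhere.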