Multitouch is a ubiquitous input technique, used primarily in mobile devices. Larger multitouch displays have mostly been limited to tabletop research projects, but hardware manufacturers are also integrating multitouch into desktop monitors. Multitouch input differs from mouse and keyboard input in several ways that make it a promising input technique. While the mouse is an indirect and primarily unimanual input device, multitouch often supports direct-touch input and encourages bimanual interaction. Multitouch also accepts all ten fingers as input, providing many more input degrees of freedom than the mouse. Building multitouch applications first requires understanding these differences. We present a pair of user studies that contribute to the understanding of the direct-touch and bimanual benefits afforded by multitouch. We then discuss how we leverage these benefits to create multitouch gestures for a professional content-creation application. Lastly, we present a declarative multitouch framework that helps developers build and manage gestures.

In our first study, users select multiple targets with a mouse, one finger, two fingers (one from each hand), and any number of fingers. The fastest multitouch interaction is about twice as fast as the mouse for selection. The direct-touch nature of multitouch accounts for 83% of the reduction in selection time; bimanual interaction accounts for the remainder. To further investigate bimanual interaction for making directional motions, we examine two-handed marking menus, bimanual techniques in which users make directional strokes to select menu items. With training, we find that making strokes bimanually outperforms making strokes serially by 10-15%. Our user studies demonstrate that users benefit from multitouch input. However, little work has been done to determine how to design applications that leverage these benefits for professional content-creation tasks.
We investigate using multitouch input for a professional-level task at Pixar Animation Studios. We present Eden, a multitouch application for building virtual organic sets for computer-animated films. The experience of two set construction artists suggests that Eden outperforms Maya, the mouse-and-keyboard system currently used by artists. We present design guidelines that enabled us to create gestures that are easy for artists to remember and perform. Eden demonstrates the viability of multitouch for improving real user workflows.

However, multitouch applications are challenging to implement. Using current frameworks, developers must meticulously track the proper sequence of touch events from multiple temporally overlapping touch streams using disparate event-handling callbacks. In addition, multiple gestures often begin with the same touch event sequence, leading to gesture conflicts in which user input is ambiguous, so developers must perform heavy runtime testing to detect and resolve conflicts. We simplify multitouch gesture creation and management with Proton, a framework that allows developers to declaratively specify a gesture as a regular expression of customizable touch event symbols. Proton provides automatic gesture matching and static analysis of gesture conflicts. We also introduce gesture tablature, a graphical notation that concisely describes the sequencing of interleaved touch events. Finally, we present a user study indicating that users can read and interpret gesture tablature over four times faster than event-handling pseudocode.

Multitouch applications require new design principles and tools for development. This dissertation addresses the challenges of designing gestures and interfaces that benefit from multitouch input and presents tools to help developers build and recognize multitouch gestures. This work serves to facilitate a wider adoption of multitouch interfaces.
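The core idea behind declaring gestures as regular expressions can be sketched as follows. This is an illustrative simplification, not Proton's actual API: touch events are encoded as symbols (D/M/U for touch-down/move/up, with a digit identifying the touch), and a gesture is an ordinary regular expression over a serialized stream of those symbols. Proton's real symbols also carry attributes such as the object hit by the touch; those are omitted here.

```python
import re

# One-finger drag: touch down, any number of moves, touch up.
drag = re.compile(r"D1(M1)*U1")

# Two-finger gesture (e.g. pinch): first touch down and moving, second
# touch down, interleaved moves of either touch, then both released in
# either order.
pinch = re.compile(r"D1(M1)*D2(M1|M2)*(U1(M2)*U2|U2(M1)*U1)")

def matches(gesture, event_stream):
    """Return True if the full serialized touch event stream
    matches the declared gesture."""
    return gesture.fullmatch(event_stream) is not None

stream = "D1M1M1U1"                      # a one-finger drag
print(matches(drag, stream))             # True
print(matches(pinch, stream))            # False
print(matches(pinch, "D1M1D2M1M2U1M2U2"))  # True
```

Because both gestures are plain regular expressions, ambiguity between them can in principle be detected statically (two gestures conflict when their languages share a common prefix), which is the intuition behind Proton's static conflict analysis.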
We conclude with several research directions for continuing the investigation of multitouch input.



