Handpad: a virtual mouse for controlling laptops in a smart home
© The Author(s) 2017
Received: 21 March 2017
Accepted: 19 July 2017
Published: 14 September 2017
This study investigates the possibility of equipping fixed furniture with touch screens to control laptops or other smart home devices, creating a holistic smart home control system. Although multi-touch screens have recently been integrated into desktop use, most users prefer a classical mouse for desktop input. This study introduces Handpad, a new indirect touch input device designed to control laptops in a smart home. Handpad combines the mobility of a touch screen with the precision of a mouse by tracking whole-hand movement on the touch screen and moving the screen pointer accordingly. In a user test, participants showed a high level of acceptance and positive attitudes toward Handpad.
Keywords: Handpad · Smart home · Touch screen · Input device · Mouse
Multi-touch screens are becoming increasingly common in daily use, especially given the popularity of smart phones and tablets. However, the main usage of multi-touch screens is restricted to mobile applications. Meanwhile, users are now dealing with more smart devices in their homes. No single input method currently satisfies users' need for a holistic smart home control solution; it can even be difficult for users to find and distinguish between the various input devices. In a smart home, indirect interaction modes can be used to create a holistic control system.
Are touch screens reserved for mobile usage? This study explores the possibility of adapting touch screen technology to domestic environments. It provides a method for combining the capabilities of touch screen and mouse interaction in a smart home, demonstrating that touch screen usage is limited only by our imagination. Today, pointing devices are mainly adapted to specific scenarios: the touch screen for better mobility and the mouse for desktop use. Mouse interaction, with its two buttons and scroll wheel, remains the simplest and most efficient input for most desktop users. A mouse outperforms other devices in simplicity, precision, and traceability. Furthermore, the places in which we use desktops have evolved, ranging from our offices to our homes. Can fixed furniture be equipped with touch screens to control a desktop?
This study explores the implementation of a virtual mouse that uses a touch screen to allow new methods of interaction while maintaining the simplicity and precision of using a mouse. This study also explores the application of touch screens in fixed usage. To achieve this aim, this study introduces Handpad, an indirect touch input designed for living room and mobile environments. We conduct a user test to evaluate Handpad’s performance and efficiency.
In the following sections, we discuss related work in touch screen input and enhanced mouse input, and compare the two.
Touch screen input
Previous research has shown that direct touch screens can enrich desktop interactions. However, users tire easily when interacting with vertical touch screens; interacting with a horizontal surface is faster and less exhausting. When touch screens are used as direct input devices, the finger hides the area of the screen underneath it. As a result, it is difficult to be as precise with a touch screen as one can be with a mouse. This issue is known as the occlusion problem. Some attempts have been made to overcome this issue. For example, Zimmermann et al. [1] enhanced the touch screen experience with haptic feedback for blind interaction.
More than a simple pointing device, a touch screen offers a large range of interaction possibilities. Schmidt et al. [2] proposed an indirect multi-touch approach. Damaraju et al. [3] introduced Multi-Tap Sliders, an innovative interface using a multi-touch screen to control different parameters in image editing software. In a smart home, there are multiple opportunities for touch screens to control devices. Obviously, mobile phones are often used to control smart home devices, and they can combine input, output display, and remote control. However, in domestic spaces, the availability of a smart phone within a user’s reach has recently been reported to be as low as 46% [4]. Moreover, it takes about 1 s to start interacting with a smart phone, and potentially another 4 s for a user to remove the smart phone from his or her pocket [5]. Thus, it would be better if fixed furniture could be equipped with indirect touch screen input to create a holistic control system in a smart home.
Enhanced mouse input
Even though some laptops are equipped with a touchpad, most users prefer a classical mouse. Several attempts to increase the interactivity of the mouse can be found in the literature, including pressure-sensitive input [6, 7] and tactile and force feedback [8]. These approaches slightly increase mouse-targeting speed. Additionally, there have been several attempts to use hand motion trackers to increase mouse interactivity. For example, Mistry and Maes [9] introduced Mouseless, which is based on IR sensors. The device lies at the edge of a computer screen or keyboard to track hand movements. The user’s finger shapes are recognized and their movements are translated into mouse clicks or movements. Kuribara et al. [10] presented Sinkpad, a malleable mouse pad that the mouse can sink into as an additional interaction. Yang et al. [11] presented a prototype of Magic Finger, a finger-mounted device with optical sensors that can track finger movement on a surface and identify the surface being touched. Lv et al. [12] used a 10.1-in. tablet to implement a soft keyboard and touch pointer as an indirect input device.
Comparison of touch screen and mouse
Researchers have compared the touch screen and mouse in single pointing, dragging, crossing, and steering. There are some benefits to using a touch screen instead of a mouse, but these benefits are slight [13–15]. Findlater et al. [13] compared a classical mouse and touch screen for the four operations stated above, finding that the touch screen outperformed the mouse by only 16% on average for younger adults (19–51 years old, M = 27.7, SD = 8.9) and by 35% for older adults (61–86 years old, M = 74.3, SD = 6.6). In addition to pointing efficiency, Leftheriotis and Chorianopoulos [14] noted that users rated touch screen input twice as highly as mouse input in terms of preference, speed, and entertainment. In the games industry, Watson et al. [15] showed that touch screen input outperformed mouse input in a 2D shooting game for both vertical and horizontal surfaces.
Even though the touch screen performs comparably to or slightly better than the mouse, precision issues due to occlusion and the “fat finger problem” remain a thorn in the side of the touch screen. Sambrooks and Wilkinson [16] compared mouse and touch screen input performance in a targeting experiment, and found the miss rate for touch screen input to be twice as high as that of the mouse. Zabramski [17] compared the accuracy and speed of a mouse, pen, and touch screen in reproducing randomized shapes. The touch screen was the least accurate, but the fastest, input method.
Although the touch screen offers more possibilities for interaction, the mouse remains dominant for use in desktops. This study introduces Handpad, bringing touch screens or other hand motion trackers to desktop work to enrich interaction and productivity.
Key design considerations
Stand-by posture Users usually do not move the mouse, but remain in a stand-by position. It is important to ensure that users can rest a hand on the touch surface as comfortably as on a mouse without triggering unwanted cursor operations.
Precision Users must be able to retain the mouse’s precision-to-movement ratio.
Accuracy Users must be able to move the cursor to its intended location as easily as with a mouse.
Ease of movement Moving the cursor on the touch screen must require no more physical effort than with a mouse.
Interaction Users must be able to perform mouse behaviors such as clicking, dragging and dropping, scrolling, and hovering as easily as with a mouse.
Tracking activation Users need to place five fingers on Handpad to begin interacting with it. This method allows the device to distinguish intentional cursor movement from an arm simply resting on the touch surface.
Finger tracking The position of all five fingers is tracked, but only four are used to directly move the cursor. To initiate cursor movement control, users first place one hand on the touch surface. The center of the four fingers (excluding the thumb) is used as the reference point for the cursor’s relative movement, as with a traditional mouse. The thumb is ignored in the calculation of the cursor’s position, allowing users more freedom of movement without triggering unwanted actions; however, the thumb is essential in the finger identification process.
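As a concrete illustration, this relative-movement mapping can be sketched as follows. This is a minimal sketch rather than the authors' implementation; in particular, the gain parameter is an assumption, since the paper does not specify the transfer function between hand motion and cursor motion.

```python
def centroid(points):
    """Mean (x, y) position of a list of touch points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def cursor_delta(prev_fingers, curr_fingers, gain=1.0):
    """Relative cursor movement derived from the shift of the four-finger
    centroid between two frames, mirroring how a mouse reports motion
    deltas rather than absolute positions.  `gain` is a hypothetical
    sensitivity factor."""
    px, py = centroid(prev_fingers)
    cx, cy = centroid(curr_fingers)
    return ((cx - px) * gain, (cy - py) * gain)
```

Moving all four fingers two units to the right thus moves the cursor by (2 × gain, 0), regardless of where on the surface the hand rests.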
Finger identification To distinguish the thumb from the other fingers, Handpad uses the method presented by Au and Tai [18]. The center of the contact points is first computed. The angles formed by each pair of consecutive contact points (in space) and the center are then computed. The spanning angle, which is the sum of the two adjacent angles of the same finger, is used to differentiate the thumb from the other fingers, since the thumb has the largest spanning angle. The remaining fingers are then identified by their proximity to the thumb.
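The spanning-angle test can be illustrated with a short Python sketch. This is an independent reimplementation of the idea described above, not the authors' code, and the five contact coordinates used in testing are hypothetical.

```python
import math

def identify_thumb(points):
    """Return the index (into `points`) of the contact with the largest
    spanning angle around the centroid; `points` is a list of five
    (x, y) contacts.  The thumb, being farthest from the finger cluster,
    has the widest angular gaps to its neighbours."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # Angle of each contact as seen from the centroid.
    angles = [math.atan2(y - cy, x - cx) for x, y in points]
    order = sorted(range(len(points)), key=lambda i: angles[i])
    n = len(order)
    best, best_span = None, -1.0
    for k, i in enumerate(order):
        prev_i = order[k - 1]          # circular neighbours
        next_i = order[(k + 1) % n]
        # Angular gaps to both neighbours; their sum is the spanning angle.
        gap_prev = (angles[i] - angles[prev_i]) % (2 * math.pi)
        gap_next = (angles[next_i] - angles[i]) % (2 * math.pi)
        if gap_prev + gap_next > best_span:
            best, best_span = i, gap_prev + gap_next
    return best

def order_fingers(points):
    """Thumb first, then remaining fingers sorted by distance to the thumb."""
    t = identify_thumb(points)
    tx, ty = points[t]
    rest = sorted((i for i in range(len(points)) if i != t),
                  key=lambda i: (points[i][0] - tx) ** 2 + (points[i][1] - ty) ** 2)
    return [t] + rest
```

For a natural hand pose, the index finger is the non-thumb contact closest to the thumb, so sorting by distance recovers the index-to-pinky order.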
Click trigger To trigger a mouse down event, users lift one finger from the touch surface. From a mouse down event, to trigger a mouse up event, users replace the lifted finger on the touch surface, completing a click event. Although this click method may seem counterintuitive, since most users are accustomed to pressing a button to trigger a click, the lift-and-place concept is well suited to replace drag-and-drop.
Button mappings Each finger can be mapped to a button. In this study, we map the index finger to the left-click button. A two-finger touch gesture is used to trigger scrolling.
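The lift-and-replace click scheme reduces to tracking, frame by frame, whether the index finger's contact is present. The following minimal sketch illustrates this; the slot numbering and the string event representation are assumptions made for illustration only.

```python
class ClickTracker:
    """Minimal sketch of the lift-and-replace click scheme: lifting the
    index finger emits a mouse-down event, placing it back emits a
    mouse-up event, together forming one click."""

    def __init__(self, index_slot=1):
        self.index_slot = index_slot   # which contact slot is the index finger
        self.index_down = True         # finger assumed resting on the surface
        self.events = []

    def update(self, touching_slots):
        """`touching_slots` is the set of finger slots currently touching."""
        touching = self.index_slot in touching_slots
        if self.index_down and not touching:
            self.events.append("down")   # finger lifted -> button press
        elif not self.index_down and touching:
            self.events.append("up")     # finger replaced -> button release
        self.index_down = touching
```

Because the "button" stays logically pressed while the finger is lifted, drag-and-drop falls out of the same mechanism: lift, move the hand, replace.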
Handpad software architecture
The Android application serves as a Handpad client, detecting finger positions from the multi-touch tablet surface and sending these data to the computer.
The Windows computer plays the role of the Handpad server, receiving and processing finger position data from the tablet and controlling the computer’s onscreen cursor accordingly.
On startup, the server searches for an Android device connected to the computer; if one is found, it triggers the Android Handpad client on the tablet. Communication between the two entities then begins and the user can move the onscreen pointer.
Handpad Android client
The Handpad Android client application detects and records users’ hand motions and transmits them to a computer. It is launched when communication between the Android device and personal computer has been established. The protocol used for this purpose is the Android Open Accessory Protocol 1.0, which has an accessory mode that allows Android devices to communicate and exchange data with another device through USB.
The procedure followed by the Android tablet and the Handpad Android client is as follows: the Android device is set to its default USB connection mode when it is connected to a computer; the Android device is set to Android accessory mode when the Handpad Windows server finds it; the Handpad Android client is then automatically launched and starts transmitting data to the connected computer. Once communication begins, the Handpad Android client transmits data continuously. The data transmitted to the Handpad Windows server are written in 128-byte words. The first 8 bytes are reserved for the number of points currently detected on the device surface. The remainder of the word consists of 8-byte pairs of x and y coordinates for each detected point (if any), with each coordinate coded as a 4-byte float.
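This word layout lends itself to a straightforward codec, sketched below in Python. The byte order and the integer width of the count field are assumptions; the paper fixes only the field sizes (8-byte count, 4-byte float coordinates, 128-byte words).

```python
import struct

WORD_SIZE = 128
MAX_POINTS = (WORD_SIZE - 8) // 8   # 15 points of two 4-byte floats each

def encode_word(points):
    """Pack touch points into one 128-byte word: an 8-byte count followed
    by (x, y) float pairs, zero-padded to the full word size."""
    assert len(points) <= MAX_POINTS
    buf = struct.pack("<q", len(points))          # assumed little-endian count
    for x, y in points:
        buf += struct.pack("<ff", x, y)           # two 4-byte floats per point
    return buf.ljust(WORD_SIZE, b"\x00")          # pad unused point slots

def decode_word(buf):
    """Inverse of encode_word: read the count, then that many (x, y) pairs."""
    (count,) = struct.unpack_from("<q", buf, 0)
    return [struct.unpack_from("<ff", buf, 8 + 8 * i) for i in range(count)]
```

With this layout, one word carries at most 15 simultaneous contacts, which is ample for the five-finger interaction Handpad uses.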
Handpad Windows server
The Handpad Windows server, installed on a computer, moves the onscreen cursor according to users’ hand movements as detected by the Android device and the Handpad Android client. To establish communication, the USB protocol is first used to detect an Android device connected to the computer.
The procedure followed by the Windows computer and the Handpad Windows server is as follows: the Handpad Windows server looks for an Android device through the USB protocol; once an Android device is connected, the server verifies its compatibility by checking whether the device supports accessory mode; if compatible, the server then attempts to start the device in accessory mode; if the device supports the Android Open Accessory Protocol, the server establishes communication with it; the mouse controller is then started, and the device can control the onscreen pointer.
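For illustration, this handshake can be sketched with the standard Android Open Accessory 1.0 vendor control requests. This is a hedged sketch, not the authors' implementation: the control-transfer callable is injected (pyusb's dev.ctrl_transfer has this signature) so the logic can be exercised without hardware, and the identification-string indices and values are illustrative.

```python
# AOA 1.0 vendor control request codes (from the Android Open Accessory spec).
AOA_GET_PROTOCOL = 51   # query supported AOA protocol version
AOA_SEND_STRING = 52    # send one accessory identification string
AOA_START = 53          # ask the device to re-enumerate in accessory mode

def start_accessory_mode(ctrl_transfer, strings):
    """Run the AOA 1.0 handshake using an injected
    `ctrl_transfer(bmRequestType, bRequest, wValue, wIndex, data_or_length)`
    callable.  `strings` maps string index -> identification string
    (manufacturer, model, ...).  Returns True if accessory mode was started."""
    # 1. Ask the device which AOA protocol version it supports (0 = none).
    raw = ctrl_transfer(0xC0, AOA_GET_PROTOCOL, 0, 0, 2)
    version = raw[0] | (raw[1] << 8)
    if version < 1:
        return False                       # device is not AOA-compatible
    # 2. Send the null-terminated accessory identification strings.
    for index, text in strings.items():
        ctrl_transfer(0x40, AOA_SEND_STRING, 0, index, text.encode() + b"\x00")
    # 3. Ask the device to switch into accessory mode.
    ctrl_transfer(0x40, AOA_START, 0, 0, b"")
    return True
```

After the START request, the device re-enumerates with the accessory USB IDs and the server can open the bulk endpoints to receive the 128-byte data words.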
Handpad mouse controller
Once communication between the Android device and Windows computer has been established, the Handpad Windows server receives user hand movement data from the Android device and processes them to produce corresponding onscreen cursor movements. The mouse controller is responsible for processing the received data and subsequently controlling the onscreen pointer.
Step 0: no fingers are detected;
Step 1: five fingers are detected and mouse control begins;
Step 2: fewer than three fingers are detected and mouse control ends.
Once State 4 is reached, mouse control begins. First, click gesture discrimination occurs. If a click gesture is detected, the click is processed (States 6–8). Afterwards, cursor movement is processed (State 9).
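For illustration, the activation rules above can be condensed into a toy state tracker. This is greatly simplified relative to the full state machine, which includes the click-discrimination and cursor-movement states (States 4–9); only the start and stop rules quoted above are modeled.

```python
class HandpadStates:
    """Toy sketch of the tracking life cycle: five detected fingers start
    mouse control, fewer than three detected fingers end it.  Intermediate
    counts (3-4 fingers) keep tracking alive, which is what allows a
    lifted finger to signal a click instead of ending control."""

    def __init__(self):
        self.tracking = False
        self.events = []

    def update(self, finger_count):
        if not self.tracking and finger_count == 5:
            self.tracking = True
            self.events.append("start")    # Step 1: five fingers detected
        elif self.tracking and finger_count < 3:
            self.tracking = False
            self.events.append("stop")     # Step 2: control ends
```

Feeding the tracker a frame-by-frame finger count thus yields the start and stop transitions, with single-finger lifts (count 4) passing through untouched.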
To evaluate the proposed prototype, we conducted user tests in living room and bedroom environments, collecting evaluations from participants after using Handpad.
A total of 31 participants (14 male, 17 female), ranging in age from 18 to 30 years, took part in the user study. Participants were recruited through social networking applications and via flyers distributed in several areas on campus. Each was given an incentive of 7 USD for their participation.
Participants used Handpad to complete two tasks. In Task A, participants browsed a film database website of their choice to search for films to watch; during this task, they could read synopses, watch short trailers, and get more information about films. In Task B, participants browsed one or several news websites and read news for 10 min; they were first asked which websites they preferred to browse (if any) and were then presented with a web browser with the desired web pages opened in various tabs (or windows, depending on user preference).
Both tasks used the graphical user interface of a desktop OS (Microsoft Windows 7) with a web browser (Mozilla Firefox).
Questionnaire constructs and items
Performance expectancy
P1: I found the system useful/useless in fulfilling the purpose of the activity
P2: Using the system enabled me to accomplish tasks more quickly/slowly
P3: Using the system increased/decreased my productivity
Effort expectancy
E1: My interaction with the system was clear and understandable
E2: It would be easy/difficult for me to become skilled in using the system
E3: I found the system easy/difficult to use
E4: Learning to operate the system was easy/difficult for me
Attitude toward using technology
A1: Using the system is a bad/good idea
A2: The system makes tasks more/less interesting
A3: Using the system is fun/boring
A4: I like/dislike using the system
Social influence
S1: People who influence my behavior would be impressed/unimpressed if they saw me using the system
S2: People who are important to me would be impressed/unimpressed if they saw me using the system
S3: I would be impressed/unimpressed with someone if I saw him or her using the system
S4: I think/don’t think people who use the system have more prestige than those who do not
Facilitating conditions
F1: I have/don’t have the resources necessary to use the system
F2: I have/don’t have the knowledge necessary to use the system
F3: The system is/is not compatible with other systems I use
Self-efficacy
I could complete a job or task using this system…
SE1: if there were no one around to tell me what to do as I go
SE2: if I could call someone for help if I got stuck
SE3: if I had a lot of time to complete the task for which the software was provided
SE4: if I had only the built-in help facility for assistance
Anxiety
ANX1: I feel apprehensive/fearless about using the system
ANX2: It scares me to think that I could lose a lot of information using the system by hitting the wrong key
ANX3: I hesitate to use the system for fear of making mistakes I cannot correct
ANX4: The system is somewhat intimidating/encouraging to me
Behavioral intention
BI1: I would/wouldn’t like to use the system
Participants received brief initial instructions for using Handpad and had the opportunity to practice. The order of the living room and bedroom scenarios was randomized. In each scenario, participants were asked to complete one of the two tasks using Handpad: Task A, browsing a film database to search for films to watch, or Task B, browsing one or several news websites to read news for 10 min. Each scenario was randomly assigned a different task, and task order was also randomized. After the two tasks, participants filled out the questionnaire and had a short discussion with the experimenter. In all, the procedure took 45 min on average.
Conclusion and future work
This study proposed using a touch surface as an indirect input, since it can be easily and seamlessly embedded into a smart home. We introduced Handpad, a new mouse emulation technique for indirectly controlling an onscreen pointer, which uses hand movements detected on the touch surface to move an onscreen cursor. We designed a prototype to test user acceptance of Handpad in hedonic scenarios. This prototype provided good insight into how Handpad works in real-world scenarios. The user study showed that participants had high acceptance of Handpad. Though Handpad is still a prototype, these results show its potential as a viable alternative input in hedonic scenarios. Fixed furniture can be equipped with such touch screens to create a holistic smart home control system. This concept can be refined in future studies.
TCM and HJH are responsible for conceiving of and writing this paper, and for analyzing the results presented herein. They took suggestions when necessary from PLP. All authors read and approved the final manuscript.
This study was funded by a Grant (No. 71661167006) from the National Natural Science Foundation of China.
The authors declare that they have no competing interests.
Availability of data and materials
Data will be made available to all interested researchers upon request.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Zimmermann S, Rümelin S, Butz A (2014) I feel it in my fingers: haptic guidance on touch surfaces. In: Proceedings of the 8th international conference on tangible, embedded and embodied interaction. ACM, New York
2. Kern D, Marshall P, Schmidt A (2010) Gazemarks: gaze-based visual placeholders to ease attention switching. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York
3. Damaraju S et al (2013) Multi-tap sliders: advancing touch interaction for parameter adjustment. In: Proceedings of the 2013 international conference on intelligent user interfaces. ACM, New York
4. Dey AK et al (2011) Getting closer: an empirical investigation of the proximity of user to their smart phones. In: Proceedings of the 13th international conference on ubiquitous computing. ACM, New York
5. Ashbrook DL et al (2008) Quickdraw: the impact of mobility and on-body placement on device access time. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York
6. Cechanowicz J, Irani P, Subramanian S (2007) Augmenting the mouse with pressure sensitive input. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York
7. Omata M, Kajino M, Imamiya A (2009) A multi-level pressure-sensing two-handed interface with finger-mounted pressure sensors. In: Proceedings of graphics interface 2009. Canadian Information Processing Society, Mississauga
8. Akamatsu M, Sato S (1992) The multi-modal integrative mouse: a mouse with tactile display. In: Posters and short talks of the 1992 SIGCHI conference on human factors in computing systems. ACM, New York
9. Mistry P, Maes P (2011) Mouseless: a computer mouse as small as invisible. In: CHI’11 extended abstracts on human factors in computing systems. ACM, New York
10. Kuribara T, Shizuki B, Tanaka J (2013) Sinkpad: a malleable mouse pad consisted of an elastic material. In: CHI’13 extended abstracts on human factors in computing systems. ACM, New York
11. Yang X-D et al (2012) Magic finger: always-available input through finger instrumentation. In: Proceedings of the 25th annual ACM symposium on user interface software and technology. ACM, New York
12. Lv C et al (2013) MK-pad: a Mouse + Keyboard input technique for distance interaction through a mobile tablet device. In: Proceedings of the 11th Asia Pacific conference on computer human interaction. ACM, New York
13. Findlater L et al (2013) Age-related differences in performance with touchscreens compared to traditional mouse input. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York
14. Leftheriotis I, Chorianopoulos K (2011) User experience quality in multi-touch tasks. In: Proceedings of the 3rd ACM SIGCHI symposium on engineering interactive computing systems. ACM, New York
15. Watson D et al (2013) Deconstructing the touch experience. In: Proceedings of the 2013 ACM international conference on interactive tabletops and surfaces. ACM, New York
16. Sambrooks L, Wilkinson B (2013) Comparison of gestural, touch, and mouse interaction with Fitts’ law. In: Proceedings of the 25th Australian computer–human interaction conference: augmentation, application, innovation, collaboration. ACM, New York
17. Zabramski S (2011) Careless touch: a comparative evaluation of mouse, pen, and touch input in shape tracing task. In: Proceedings of the 23rd Australian computer–human interaction conference. ACM, New York
18. Au OKC, Tai CL, Fu H (2012) Multitouch gestures for constrained transformation of 3D objects. In: Computer graphics forum. Wiley, Hoboken
19. Venkatesh V et al (2003) User acceptance of information technology: toward a unified view. MIS Quarterly, pp 425–478