Design and implement chords and personal windows for multi-user collaboration on a large multi-touch vertical display
© The Author(s) 2016
Received: 26 November 2015
Accepted: 22 June 2016
Published: 10 September 2016
Keywords: Chords, Multi-touch, Collaboration, Personal windows, Multi-user, Large screen
In recent years, there have been great advances in the accuracy and the number of simultaneous touches supported by large-scale multi-touch (MT) hardware (e.g., FTIR, laser-plane, DI, and combined installations). Although this allows multiple users to interact with a relatively low-cost screen simultaneously, user interface software support for group collaboration is still limited. For example, many MT systems (e.g., moving and resizing photos) assume that co-located users perform the exact same type of interaction on the screen, but some applications (such as drawing on a shared canvas with different pens or working on maps) require concurrent, diverse interactions. As a result, there is a need for user interfaces that support concurrent individual actions on a multi-touch screen without the need for special equipment.
A table/wall setting provides a large interactive visual surface for groups to interact together. It encourages collaboration and coordination, as well as decision making and problem solving among multiple users, and therefore calls for new kinds of interface . Since most applications are developed for desktop computers or mobile devices and for single-user interaction, new interaction techniques that support seamless collaboration on larger MT screens are needed. Most conventional metaphors and underlying interface infrastructures for single-user desktop systems have traditionally been geared towards mouse- and keyboard-based WIMP interface design and may not be suitable for large MT screens. For example, Nacenta et al.  carried out an exploratory study to determine how several established interaction techniques (such as drag-and-drop, radar views, etc.) affect coordination and awareness in tabletop tasks and showed that the choice of interaction technique does indeed matter, affecting coordination, performance and preference measures. Elliott and Hearst  proposed that a touch-sensitive interface is a more appropriate interaction technique for larger interaction surfaces.
In this paper, we reflect on the need for such interfaces for multi-touch screens and propose a technique to improve group work on an MT screen: the combination of chord interaction with personal action windows for multiple users. Previous research has highlighted the need for a novel set of MT programming toolkits  that are reusable . Thus, we have designed and implemented this novel technique as an open-source library and evaluated its quality for group collaboration with a novel experimental task.
In summary, the main contributions of this research are (a) the design and (b) the development of a multi-touch chord interface combined with personal action windows in a collaborative environment, as a seamless identification and interaction technique for large vertical MT displays.
Chords and personal action windows
In the following subsections, we first review related work on the chord interaction technique and on personal-window interfaces; we then demonstrate the need for a toolkit that can handle these multi-user multi-touch techniques and describe the kind of experimental task needed to evaluate them.
Chorded input on multi-touch screens
Previous multi-touch research has focused on improving single-user performance with chorded menus. Lepinski et al.  found that directional chords for marking menus performed significantly faster than traditional hierarchical menus. Bau et al.  proposed the Arpege contextual technique to make it easy for users to learn multi-touch chord gestures. Wagner et al.  showed that even more complex posture chords with multiple fingers can be learned and memorized. Bailly et al.  found that finger-count shortcuts perform better in menu selection, especially with expert users. Kin et al.  proposed a finger registration technique that can identify in real time which hand and fingers of the user are touching the multi-touch device. On this basis, they introduced the Palm Menu, which directly maps commands or operations to different combinations of fingers, and found that using finger chords offers a significant performance advantage.
According to research conducted by Wobbrock et al. , when users were asked to propose their own gestures in a participatory design experiment, they claimed that they rarely care about the number of fingers they employ on an MT surface. This seems to contradict the theory behind the chorded input we propose in this work. However, the participants in that experiment were novice users with no previous experience of any MT surface. In other research, Bailly et al.  showed that finger-count shortcuts can be learned faster than stroke shortcuts, confirming that people easily learn to “express numbers with their fingers”. According to Kin et al. , chorded interaction techniques may be more suitable for users who have already been trained, and, as the same authors demonstrate, using finger chords offers a significant performance advantage compared to popup buttons.
Personal multi-touch areas
There have been many studies investigating territoriality in co-located MT tabletop installations  or in remote tabletop settings . According to these observations, users usually prefer working in their own personal spaces and even partition the screen so that each user has his or her own private area to work in (as in Morris et al.’s replicated control widgets ). Additionally, in a tabletop environment, users tend to interact mostly in the area near where they are sitting . Based on these observations, and given users’ experience with traditional desktop environments, a personal area similar to a window was considered during the design of the proposed multi-touch interaction technique.
As both our experience and the taxonomy of multi-touch frameworks discussed by Kammer et al.  show, many different multi-touch SDKs and toolkits have been developed. Some of them are device-specific (e.g. the Microsoft Surface SDK or the DiamondTouch SDK). On the other hand, multi-touch toolkits such as Python Multitouch (PyMT), Multi-touch for Java and TouchScript (Unity) are open-source, platform-independent systems. There is no doubt that the multitouch community is vibrant, and new toolkits are constantly being developed by practitioners and hobbyists (e.g. Kivy) as well as researchers (e.g. uTableSDK).
All these multi-touch SDKs/toolkits support multiple touches. However, it seems that the developers who designed them did not really focus on one of the main characteristics of a multi-touch surface: multi-user interaction. They did not build tools/widgets that can be used by multiple users simultaneously and thus augment collaboration; instead, they relied on other developers to build their own tools by extending the toolkits. Indeed, some interesting widgets, such as multi-touch menus or pie menus, can be found in the literature. However, once more, these widgets have primarily been developed for single-user use and were evaluated accordingly.
Based on the studied literature, there is a need for more generic toolkits that can be used in various situations for co-located collaboration.
Experimental tasks in related work
Apart from the toolkits, there is a need for tasks that evaluate collaborative technologies . In our work, we focus on a task aimed mainly at examining the physical performance of the users rather than on a decision-making or intellective type of task (such as the job-shop scheduling task proposed in ). Some experimental tasks exist in the literature, such as the collaborative jigsaw puzzle . Based on the literature, drawing stands out as a relatively representative collaborative application for multiple users, either in a co-located environment (e.g. , as in our case) or in remote environments (distant drawing, e.g. ). We finally settled on a more simplified drawing task, like that in Dillon et al.’s  experiment, because researchers need to gather more data on user behaviour, preferences and strategies. For a multi-user multi-touch interaction technique in particular, researchers need a task that (a) allows for simultaneous use by multiple users, (b) urges users to constantly interact and select items from a hypothetical menu (as in the collaborative photo-tagging task of Morris et al. , but without any special equipment) and (c) can be used on a large vertical MT screen rather than being restricted to tabletop use.
ChordiAction toolkit and interaction design
In this section, we discuss the proposed interaction technique: we present the algorithm we have implemented and the interaction design of a non-intrusive software user-identification technique, which we propose as a solution for simultaneous multi-user interaction on a multi-touch screen.
Our main aim was to promote diverse and simultaneous use of the multi-touch screen by multiple users. Additionally, our chord-interaction toolkit was designed to be configurable and reusable: developers or researchers can customize it to fit their own needs or experiments.
In this subsection, we describe an abstract algorithm of what we have implemented:
First, we define how the area in which the chord will be applied is triggered; options include a double-tap or a long-tap event. We also define other details, such as the number of seconds the system will wait to receive the chord and where the chord interaction area will be placed relative to the interaction. In line 2, the developer creates an event handler that monitors interactions; when the interaction that triggers the chord area takes place, the event is fired (line 4). The system then locates the place where the event occurred (line 6) and reserves the space (line 7) to let the user perform his/her chord. Depending on the number of fingers inside the reserved area (line 9), the system performs the appropriate action (line 11).
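A plain-Python sketch of these steps might look as follows (all names and values here are illustrative, not the toolkit's actual API; the real implementation is built on PyMT and operates on touch events rather than coordinate tuples):

```python
import math
import time

CHORD_RADIUS = 7.5    # half of the 15 cm palm-sized circle (assumed units: cm)
CHORD_TIMEOUT = 2.0   # seconds the area stays reserved (illustrative value)

class ChordArea:
    """A temporarily reserved circular area in which a chord is articulated."""
    def __init__(self, center, radius=CHORD_RADIUS, timeout=CHORD_TIMEOUT):
        self.center = center
        self.radius = radius
        self.deadline = time.monotonic() + timeout  # when the reservation ends

    def contains(self, touch):
        dx, dy = touch[0] - self.center[0], touch[1] - self.center[1]
        return math.hypot(dx, dy) <= self.radius

    def finger_count(self, touches):
        # Only fingers placed inside the reserved circle count towards the chord.
        return sum(1 for t in touches if self.contains(t))

def handle_trigger(trigger_pos, touches, actions):
    """Reserve an area at the triggering touch and map the chord to an action."""
    area = ChordArea(trigger_pos)          # lines 6-7: locate and reserve space
    n = area.finger_count(touches)         # line 9: fingers inside the area
    return actions.get(n)                  # line 11: the bound action, if any

actions = {3: 'draw_blue', 4: 'draw_yellow', 5: 'draw_purple'}
touches = [(10, 10), (11, 12), (13, 10), (40, 40)]  # the last touch is outside
print(handle_trigger((11, 11), touches, actions))   # -> draw_blue
```

A chord with a finger count that has no bound action simply returns no selection, which matches the toolkit's behaviour of ignoring unrecognized chords.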
Interaction design and application development
Our goal is to allow users to work together (performing actions, e.g. selecting an option from a menu) in parallel, independently or sequentially, without the need to negotiate turns. Initial experiments  suggested the transition from a fixed selection technique, where the user simply clicks/touches an item in a static menu to select it, to multi-user chorded selection, where the user employs a circular chording area that is temporarily reserved (for a number of seconds) whenever he/she touches the MT screen. In a multi-user environment, users can dynamically reserve multiple small circular areas, each about the size of one’s palm (15 cm in diameter). In that small area, the user performs a chord to select the appropriate menu item or function. With the support of a status indicator on the menu bar, users can see which menu item must be chosen and how many fingers they have to place on the surface to select it.
Multiple users can touch different parts of the screen, and different small areas will be temporarily reserved for chorded modifiers accordingly. The reserved area is a circle around the user’s first touch, about the size of the user’s hand, making it easy to place the appropriate number of fingers and thus apply the chord that selects the desired menu item. We have designed a multi-user MT component that allows users to place multiple fingers anywhere on the display. Each time a user makes a selection, the appropriate action/function is activated.
The following lines of code demonstrate the use of ChordiAction toolkit in an example application:
To use the ChordiAction toolkit, we first import the PyMT library and the toolkit module, as in line 2. In the fifth line a new ChordiAction object is created. The interaction_style is the only parameter needed to create a ChordiAction object; in this example the interaction style is ‘double_tap’, meaning that the user has to double-tap the screen to enable the chord interaction technique. Other possible choices are ‘single_tap’ and ‘long_tap’ (where the user has to touch the screen continuously for more than 0.5 s to enable chord interaction).
In line 6 the ChordiAction object is added to the widget tree. From this point on, whenever a user double-taps the screen, a circle pops up within which the user articulates the desired chord. When the user lifts all fingers from that circular area, the ChordiAction toolkit fires an event (line 7). To catch the event and take the appropriate actions, the chord_done function is used, as in line 8. The event returns two variables: the position where the chord was articulated and the selection that was made (the number of fingers). In this simple example, in lines 8 and 9 these values are simply printed on the screen for every chord made by the user.
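Because the original listing is not reproduced here, the following self-contained stub mirrors the usage pattern just described; the class is a stand-in for the real PyMT-based widget, and only the callback plumbing (interaction style, chord-done event carrying position and selection) is reproduced:

```python
class ChordiActionStub:
    """Stand-in for the ChordiAction widget, reproducing its event pattern."""

    STYLES = ('single_tap', 'double_tap', 'long_tap')

    def __init__(self, interaction_style='double_tap'):
        if interaction_style not in self.STYLES:
            raise ValueError('unknown interaction style: %s' % interaction_style)
        self.interaction_style = interaction_style
        self._handlers = []

    def bind_chord_done(self, handler):
        # Register a handler, as with chord_done in the example application.
        self._handlers.append(handler)

    def finish_chord(self, pos, selection):
        # In the real toolkit this fires when the user lifts all fingers
        # from the circular area; here we call it directly to simulate that.
        for handler in self._handlers:
            handler(pos, selection)

events = []

def chord_done(pos, selection):
    # The event carries the chord position and the number of fingers used.
    events.append((pos, selection))
    print('chord at', pos, 'selection', selection)

chord = ChordiActionStub(interaction_style='double_tap')
chord.bind_chord_done(chord_done)
chord.finish_chord(pos=(120, 340), selection=3)  # simulate a 3-finger chord
```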
Stimulating interaction with chords in an experimental task
Based on our literature review, Dillon et al.’s  experiment is close to our needs for gathering more data on user behaviour and preferences. By extending this experiment, we arrived at a dot-to-dot type of drawing task. The application we developed allows for collaboration along with interference among users during simultaneous interaction on the MT screen, which is the ideal combination for our experiment. Moreover, users are familiar with this kind of task, so they can focus on the interaction technique rather than on trying to understand the task.
Figure 9c depicts the available colours. A status indicator is presented on the upper part of the MT screen (Fig. 9d) to help users remember the chord modifiers and their respective colours. As shown in Fig. 9c, with three touches users draw a blue line, with a four-finger chord a yellow line, and with all five fingers a purple line. Even though the status indicator may seem redundant, it can be useful for new users who are not yet familiar with the chord-modifier technique.
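The chord-to-colour mapping above, together with the status-indicator text, amounts to a simple lookup (a sketch; the function names are illustrative, not the application's actual code):

```python
# Mapping from finger count (chord) to drawing colour, as in the drawing task.
CHORD_COLOURS = {3: 'blue', 4: 'yellow', 5: 'purple'}

def colour_for_chord(finger_count):
    """Return the colour bound to a chord, or None for unbound chords."""
    return CHORD_COLOURS.get(finger_count)

def status_indicator():
    # Text shown on the upper part of the screen to remind users of chords.
    return ' | '.join('%d fingers: %s' % (n, c)
                      for n, c in sorted(CHORD_COLOURS.items()))

print(colour_for_chord(4))   # -> yellow
print(status_indicator())    # -> 3 fingers: blue | 4 fingers: yellow | 5 fingers: purple
```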
The main requirement for the first experimental user task was to require users to perform several chords and to negotiate the interaction over shared screen space and tasks. Even though this is a simplified drawing application with only one type of shape, the use of chord modifiers in a collaborative environment is sufficiently represented, because users must constantly use chords to change the colour of the line to be drawn inside the personal windows. Additionally, as Fig. 9a demonstrates, two neighbouring dots always have different colours, so users are forced to articulate a chord each time they want to draw a new line, since two consecutive lines must have different colours.
Figure 10b shows what happens when a user lifts all fingers from the MT screen: the user gets a private pop-up action window in which to perform the action selected with the chorded modifier. That is, a user can only draw inside such pop-up windows. Even though this may seem a restriction, it makes it feasible for more than one user to work simultaneously, performing different actions on the screen. As also shown in the figure, only blue lines can be drawn in the left window (as indicated by the small icon in its lower-left corner), while the right window draws yellow lines, based on the chorded selection of its user (Fig. 10a).
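The binding between a chorded selection and its pop-up window can be modelled roughly as follows (a sketch with hypothetical names; in the application the windows are PyMT widgets created from the toolkit's chord events):

```python
class ActionWindow:
    """Personal pop-up window restricted to one chord-selected action."""

    def __init__(self, owner, colour, pos):
        self.owner = owner      # the user identified by the chord location
        self.colour = colour    # shown as a small icon in the window corner
        self.pos = pos          # the window can be moved around the screen
        self.strokes = []

    def draw(self, stroke, colour):
        # Only the colour selected with the chord is allowed in this window.
        if colour != self.colour:
            raise ValueError('this window only draws %s lines' % self.colour)
        self.strokes.append(stroke)

left = ActionWindow(owner='user A', colour='blue', pos=(100, 200))
right = ActionWindow(owner='user B', colour='yellow', pos=(600, 200))
left.draw(((0, 0), (5, 5)), 'blue')        # allowed in the blue window
try:
    right.draw(((0, 0), (5, 5)), 'blue')   # rejected: wrong colour here
except ValueError as exc:
    print(exc)                             # -> this window only draws yellow lines
```

Because each window accepts exactly one action, the system can attribute every stroke to a user unobtrusively, which is the identification mechanism the technique relies on.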
The task described above ensures that users perform different actions (use different chords), since each line to be drawn requires a different action (and a different window). Furthermore, users must coordinate their actions to complete the task effectively: both the chording circular areas and the windows require space, so negotiation is needed to avoid a cluttered work area for all users. This type of task fulfils all our criteria, being an effective instrument for evaluating chord interaction, as well as other techniques, for exploring users’ collaboration on an MT surface.
Categories and codes that occurred during the experimental task
Partitioning the screen
Divide the screen
Divide the workload
Parallel and synchronous interaction
In-turn type of sequential interaction
Articulate the chords in the corner
Articulate the chords in the main interaction area
Divide and conquer—partitioning the screen or sharing the workload
During our exploratory experiments with the dot-to-dot drawing task, we observed that although users were able to work in a private area (since, by articulating the appropriate chord modifier, they were given a specific window in which they could draw), they nevertheless tended to partition the screen. Most pairs used verbal communication before the task either to divide the screen or to divide the workload of the drawing task, so as to complete it as fast as possible. Thus, there were users who followed the “divide the screen” strategy and said, for example, “I connect all the dots in my area (in the right half of the screen) and you connect the dots in the left half.”, and users who chose the “divide the workload” strategy: “There are 22 dots, you connect the first 11 and the rest are mine.” However, as can be seen in Fig. 10a, many dots were deliberately clustered in the center of the pattern, preventing efficient partitioning of the screen, in an effort to observe users’ interaction in colliding situations. No pair interacted without a previously developed plan. We suppose that, by asking the users to be as fast as possible, we led them to these two strategies for improving their performance. Of course, as soon as they embarked on the task, different behaviours were observed.
Space negotiation and conflict resolution
Users were reluctant to move their window into others’ personal space. When both users had to draw in the center of the screen, they usually changed their strategy. For instance, while one user was drawing a line, the other was in the corner of the screen forming the appropriate chord (as depicted in Fig. 11c); when his window popped up, he moved it into place while the first user articulated his own chord in his own corner. Interaction thus seemed to change from parallel, synchronous drawing to an in-turn type of sequential interaction, even though there was constant input from both users simultaneously.
Users were also hesitant to touch shared controls simultaneously. For example, in some cases one user enlarged his window more than normally expected, breaking the territory rules. In these situations, users avoided closing the other’s window and withdrew, waiting for the other to close it, or continued drawing wherever there was enough space for them. Even if Peltonen et al.  claim that such situations prove funny and produce enjoyment for users interacting with an entertainment installation, we are convinced that in a more businesslike environment they could lead to frustration. In one of these moments during our experimental task, one of our users said, “This is not working!”.
Collocated collaboration with chords and personal windows
Developing tools and applications for a multi-touch surface is considered a complicated procedure because of the limitations and challenges of a larger multi-touch screen. We propose the use of ChordiAction, a collaborative user interface toolkit that can be used in various situations on co-located large-scale MT screens. According to Elliott and Hearst , larger multi-touch screens need novel interaction techniques so that users can interact across larger work areas. Wall-sized displays can be used up close by several users at a time, offer high resolution for working up close, and provide sufficient space for varied collaboration styles . The technique proposed in this work shortens the distances involved and can thus improve selection time or help avoid possible user conflicts (as in Figs. 2, 3, 4). Chord interaction techniques have also been used in previous studies (e.g.  or ), but there the main focus was on single-user mode, while we aim at collaboration among users interacting in parallel on an MT screen.
In addition, by employing personal windows, the proposed system is able to identify the user, and the user is able to perform different actions simultaneously, in parallel with other users. We have also employed a transparent layer as a see-through interface, such as Bier et al.’s toolglass widgets , that lies between the application and the fingers of the user. This type of window makes it feasible for the system to identify the user (personal window) or the appropriate action (action window) unobtrusively, since it results from the chord previously articulated by the user. Moreover, when our system is used for menu selection, chord interaction along with action windows can be considered a virtually replicated menu interface, since every user can select from a (virtually positioned) menu at a position convenient for him/her. In our system, instead of interacting with a centralized menu that, being static, cannot be shared efficiently (e.g. the example discussed in the introduction), users are able to perform different actions simultaneously without interfering with one another.
To conclude, the interaction technique we propose (a) allows simultaneous diverse interactions from multiple users, (b) shortens the distances involved and can thus improve selection time on larger multi-touch screens (in the case of a menu-selection technique), (c) helps avoid possible user conflicts by identifying the user or the action through personal action windows, and (d) is a low-cost software solution that works unobtrusively without any training sessions or additional equipment. Researchers and developers can easily adopt the proposed interaction technique by using the ChordiAction toolkit in their multi-touch applications.
Dot-to-dot collaborative task
In addition to the advantages of the proposed method described in previous sections, we introduced an experimental task for evaluating multi-user interaction toolkits, like the one proposed in this work, on multi-touch surfaces. Based on our observations, the dot-to-dot collaborative task we chose while developing the evaluation strategy proved to be a valid decision. Despite being mainly a physical performance task (a task that involves physical behaviour as opposed to symbolic or mental manipulations,  or ), it produced valuable results and shed light on users’ strategies while collaborating on the screen. Users tried to improve their time, used verbal communication, pointed out to others what to do, helped each other, worked in parallel or isolated in a partition of the screen, and tried to resolve conflicts. Users were mainly focused on the interaction and on completing the puzzle-style task, and saw it as a battle between them and the other teams that had completed the task previously with a better time. Based on our experience with the dot-to-dot collaborative task, its main advantages are: (a) it is simple to administer; (b) it is a traditionally played game and thus demands no prior knowledge from the users; (c) it is fast and enhances competitiveness among different teams, or cooperation among team members; (d) it demands coordination skills and requires users to be aware of what other users do; (e) it allows for either working in parallel or performing joint work (as in the task used by ); and (f) it is effective in producing reproducible results.
A further limitation of our menu selection technique is that it allows up to eight different menu items, considering that we have ten fingers and must use at least two for basic interaction. However, as shown by Kiger , eight items is an effective menu breadth; as far as menu depth is concerned, an MT application could reserve a suitable chord for descending into deeper levels. Instead of the directional chords used by other researchers [7, 8], we propose static chords as a much easier technique for the majority of users.
In future work, we will evaluate our technique quantitatively and compare it to other interaction techniques designed for large MT surfaces. We plan to evaluate the proposed interaction techniques in an educational context as a case study. Additional work is planned to measure the effects of using the toolkit by novice versus trained users. We would also like to take chorded interaction one step further by allowing interchangeable interaction or bimanual chords  for multiple users on a larger MT screen, giving users more free space to articulate chords and thereby increasing their selectable space. Chorded interaction could also function in network-connected tabletops as a synchronous collaboration technique for multi-user interaction, since the essential guidelines of Tuddenham and Robinson  for effective collaboration between distributed tabletops were followed during the design of the chord interaction technique.
In this work, we proposed and implemented a chording technique that, used along with personal action windows, enables higher levels of diverse multi-user interactivity, collaboration and awareness. Users’ interaction strategies were investigated, and issues such as conflict resolution were discussed. The main contribution of this work is the design and implementation of this novel seamless identification and interaction technique, which is scalable and supports diverse multi-touch interactions, especially on larger MT surfaces.
We evaluated this technique on vertical multi-touch surfaces, but it can be used in tabletop systems as well, since it was designed bearing in mind the general characteristics of MT surfaces (multi-user interaction, user orientation, user movements, etc.) and how users work on them regardless of their setting.
In this research, we examined the idea of using chords along with personal/action windows in an MT collaborative environment for menu selection as a non-intrusive technique, and we designed and implemented an easily repeatable synthetic dot-to-dot experimental task that demonstrates the potential of this technique and can be used by other researchers as a tool to evaluate other techniques on large MT surfaces. This research also demonstrates the need to design and implement toolkits and applications that are dedicated to the MT interaction style and take advantage of the unique characteristics of an MT surface. For instance, although a chording system has been absent from MT toolkits, we believe that MT-dedicated interaction techniques like this one should be integrated in future MT toolkit updates, being (a) a simple ad hoc solution, (b) fast in comparison to traditional interaction techniques, (c) atomic and thus suitable for multi-user interaction, and (d) flexible and thus scalable.
In this paper, we reflected on the need for such interfaces for multi-touch screens and demonstrated that the combination of chord interaction with personal action windows for multiple users can be a suitable technique for group work on a larger MT screen.
IL had the main idea and wrote most of this paper. He also developed the toolkit and conducted the experimental task. KC contributed to the structuring of the paper and to polishing the introduction section; he also participated in discussions and provided corrections and extended feedback. LJ helped with the experimental task and with valuable discussions. IL was also mainly responsible for the data collection and analysis. All authors read and approved the final manuscript.
We would like to thank our pilot users who participated in our experiments.
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Haller M (2008) Interactive displays and next-generation interfaces. In: Becta emerging technologies for learning, vol 3, p 91–101
- Nacenta MA, Pinelle D, Stuckel D, Gutwin C (2007) The effects of interaction technique on coordination in tabletop groupware. In: Proceedings of Graphics interface. ACM Press, New York, p 191–198
- Elliott A, Hearst M (2002) A comparison of the affordances of a digital desk and tablet for architectural image use tasks. Int J Hum Comput Stud 56(2):173–197
- Kurtenbach G, Buxton W (1993) The limits of expert performance using hierarchic marking menus. In: Proceedings of the INTERACT’93 and CHI’93 conference on human factors in computing systems. ACM, New York, p 482–487
- Wigdor D, Fletcher J, Morrison G (2009) Designing user interfaces for multi-touch and gesture devices. In: Proceedings of the 27th international conference extended abstracts on Human factors in computing systems—CHI EA’09: 2755
- Luyten K, Vanacken D, Weiss M, Borchers J, Izadi S, Wigdor D (2010) Engineering patterns for multi-touch interfaces. In: Proceedings of the 2nd ACM SIGCHI symposium on Engineering interactive computing systems (EICS ‘10). ACM Press, New York, p 365–366
- Lepinski GJ, Grossman T, Fitzmaurice G (2010) The design and evaluation of multitouch marking menus. In: Proceedings of the 28th international conference on human factors in computing systems—CHI’10, ACM Press, New York, p 2233–2242
- Bau O, Ghomi E, Mackay W (2010) Arpege: design and learning of multifinger chord gestures. CNRS-Université Paris Sud, LRI, Rapport de Recherche 1533
- Wagner J, Lecolinet E, Selker T (2014) Multi-finger chords for hand-held tablets: Recognizable and memorable. In: Proceedings of the 32nd annual ACM conference on human factors in computing systems. ACM Press, New York, p 2883–2892
- Bailly G, Lecolinet E, Guiard Y (2010). Finger-count and radial-stroke shortcuts: two techniques for augmenting linear menus on multi-touch surfaces. In: Proceedings of the 28th international conference on Human factors in computing systems—CHI’10. ACM Press, New York, p 591–594
- Au OKC, Tai CL (2010) Multitouch finger registration and its applications. In: Proceedings of the 22nd conference of the computer-human interaction special interest group of Australia on computer-human interaction, OZCHI’10, ACM Press, New York, p 41–48
- Wobbrock JO, Morris MR, Wilson AD (2009) User-defined gestures for surface computing. In: Proceedings of the 27th international conference on human factors in computing systems (CHI ‘09). ACM Press, New York, p 1083–1092
- Scott SD, Sheelagh M, Carpendale T, Inkpen KM (2004) Territoriality in collaborative tabletop workspaces. In: Proceedings of the 2004 ACM conference on computer supported cooperative work. ACM Press, New York, p 294–303
- Tuddenham P, Robinson P (2009) Territorial coordination and workspace awareness in remote tabletop collaboration. In: Proceedings of the 27th international conference on human factors in computing systems, ACM Press, New York, p 2139–2148
- Morris MR, Paepcke A, Winograd T, Stamberger J (2006) TeamTag: exploring centralized versus replicated controls for co-located tabletop groupware. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM Press, New York, p 1273–1282
- Ryall K, Forlines C, Shen C, Morris MR (2004) Exploring the effects of group size and table size on interactions with tabletop shared-display groupware. In: Proceedings of the 2004 ACM conference on computer supported cooperative work. ACM Press, New York, p 284–293
- Kammer D, Keck M, Freitag G, Wacker M (2010) Taxonomy and overview of multi-touch frameworks: architecture, scope and features. In: Proceedings of the EICS’10 workshop on engineering patterns for multi-touch interfaces
- Tan DS, Gergle D, Mandryk R, Inkpen K, Kellar M, Hawkey K, Czerwinski M (2008) Using job-shop scheduling tasks for evaluating co-located collaboration. Pers Ubiquitous Comput 12:255–267
- Kraut RE, Gergle D, Fussell SR (2002) The use of visual information in shared visual spaces: informing the development of virtual co-presence. In: Proceedings of the ACM conference on computer-supported cooperative work 2002. ACM Press, New York, p 31–40
- Zhang H, Yang XD, Ens B, Liang HN, Boulanger P, Irani P (2012) See me, see you: a lightweight method for discriminating user touches on tabletop displays. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM Press, New York, p 2327–2336
- Ishii H, Kobayashi M (1992) ClearBoard: a seamless medium for shared drawing and conversation with eye contact. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM Press, New York, p 525–532
- Dillon RF, Edey JD, Tombaugh JW (1990) Measuring the true cost of command selection: techniques and results. In: Proceedings of the SIGCHI conference on human factors in computing systems: empowering people. ACM Press, New York, p 19–26
- Leftheriotis I, Chorianopoulos K (2011) Multi-user chorded toolkit for multi-touch screens. In: Proceedings of EICS’11, ACM SIGCHI symposium on engineering interactive computing systems. ACM Press, New York, p 161–164
- Epps J, Lichman S, Wu M (2006) A study of hand shape use in tabletop gesture interaction. In: CHI’06 extended abstracts on human factors in computing systems. ACM Press, New York, p 748–753
- Bier EA, Stone MC, Fishkin K, Buxton W, Baudel T (1994) A taxonomy of see-through tools. In: Adelson B, Dumais S, Olson J (eds) Proceedings of the SIGCHI conference on Human factors in computing systems: celebrating interdependence (CHI ‘94). ACM Press, New York, p 358–364
- Glaser BG, Strauss AL (2009) The discovery of grounded theory: strategies for qualitative research. Transaction Books, Piscataway
- Peltonen P, Kurvinen E, Salovaara A, Jacucci G, Ilmonen T, Evans J, Oulasvirta A, Saarikko P (2008) It’s mine, don’t touch!: interactions at a large multi-touch display in a city centre. In: Proceedings of the twenty-sixth annual SIGCHI conference on human factors in computing systems. ACM Press, New York, p 1285–1294
- Jakobsen MR, Hornbæk K (2014) Up close and personal: collaborative work on a high-resolution multitouch wall display. ACM Trans Comput Hum Interact (TOCHI) 21(2):11
- Hornecker E, Marshall P, Dalton NS, Rogers Y (2008) Collaboration and interference: awareness with mice or touch input. In: Proceedings of the 2008 ACM conference on computer supported cooperative work. ACM Press, New York, p 167–176
- Kiger J (1984) The depth/breadth trade-off in the design of menu-driven user interfaces. Int J Man Mach Stud 20(2):201–213
- Tuddenham P, Robinson P (2007) Distributed tabletops: supporting remote and mixed-presence tabletop collaboration. In: Second annual IEEE International workshop on horizontal interactive human-computer systems (TABLETOP’07). IEEE, New York, p 19–26
- Leftheriotis I, Chorianopoulos K, Jaccheri L (2012) Tool support for developing scalable multi-user applications on multi-touch screens. In: Proceedings of ITS 2012, ACM international conference on interactive tabletops and surfaces. ACM Press, New York, p 371–374