Collaborative behavior, performance and engagement with visual analytics tasks using mobile devices

Abstract

Interactive visualizations are external tools that can support users' exploratory activities. Collaboration can bring benefits to the exploration of visual representations or visualizations. This research investigates the use of co-located collaborative visualizations on mobile devices: how working with two different modes of interaction and view (Shared or Non-Shared) and being placed in various position arrangements (Corner-to-Corner, Face-to-Face, and Side-by-Side) affect users' knowledge acquisition, engagement level, and learning efficiency. A user study was conducted with 60 participants divided into 6 groups (2 modes \(\times\) 3 positions) using a tool that we developed to support the exploration of 3D visual structures in a collaborative manner. Our results show that the shared control and view version in the Side-by-Side position is the most favorable and can improve task efficiency. In this paper, we present the results and a set of recommendations derived from them.

Introduction

Visualizations are external aids that are intended to support users’ analytical and exploratory activities [1]. Increasingly, these activities are conducted by teams. Supporting collaborative activities remains one of the great challenges within interactive visualizations and visual analytics [2, 3]. Collaborative visualizations aim at facilitating analytical activities by more than one user, and allocating different parts of the task to different people, either distributed in various places or co-located in the same physical space.

Researchers have been exploring the use of mobile devices, particularly tablets, to overcome the limitations of large displays for collaborative activities [2, 4]. Today's mobile devices are quite powerful. With mobility, portability, wireless connectivity, and touch-enabled input, tablets have been widely used and are claimed to be suitable devices to support co-located collaborative workspaces [5, 6]. There has been an increasing amount of research focusing on techniques to better support co-located collaborative activities with tablets [7,8,9,10,11], including for learning in groups. Researchers and educators have been trying to incorporate tablets into their teaching activities [12,13,14,15,16]. For example, students in a group could complete a task by simply interacting with their own devices, since all the interactions could happen within one single virtual space. Such an approach has been well received by teachers and students because it can lead to a general improvement in students' learning outcomes [17].

Most collaborative applications are about drawing or document annotation [12, 18,19,20], while the use of mobile tablets to enable collaborative exploration of visualizations of more complex concepts is still largely underexplored. Although some prior research explored the effect of sharing content among users on task performance [21], little research has compared Shared and Non-Shared modes using tablets in learning settings. Besides, previous work has identified the arrangement of users as one of several factors that influence co-located collaborative tasks [22, 23]. Some early research on position arrangement suggested that different collaborative coupling and position distances would affect user behavior and performance during collaboration. However, this related work investigated position arrangements and collaborative coupling around a shared device (like a large tabletop or wall display) [24,25,26]; research focusing on mobile devices remains scarce. Therefore, this research investigates cross-tablet collaboration in a co-located setting. We focus on two interaction modes and three position arrangements, and explore how they affect collaborative tasks with visualizations in a co-located setting. We investigate the effect that two factors, shared/non-shared view/control and position placement, have on the learning improvement and engagement of learners. To do this, we have developed a tool that can support group exploration of 3D geometrical shapes in terms of their individual structural properties and transformative processes. We ran an experiment with two collaborative modes, (1) shared and (2) non-shared interaction and view, and three position arrangements: (1) Corner-to-Corner (C-C), (2) Face-to-Face (F-F), and (3) Side-by-Side (S-S). The shared mode supports two users in interacting simultaneously; they share the view and control of the visual objects. In the non-shared mode, one user cannot see from his/her tablet what the other is doing, and control is not shared.

The main contributions of the paper include: (1) an application for mobile tablets that allows users to explore 3D geometrical shapes in terms of their individual structural properties and transformation processes; (2) an exploration of task performance, knowledge acquisition, engagement levels, and collaboration behaviors and patterns of users using mobile devices to explore visualizations; (3) an understanding of the positive affordances and limitations of each mode of interaction and position arrangement; and (4) a set of implications from our findings for future developments of collaborative visualizations and applications on mobile devices that can support positive collaborative interaction in co-located settings.

Background and related work

Co-located collaborative visualizations

Collaborative scenarios can be generally categorized along two axes: space (co-located vs. distributed) and time (synchronous vs. asynchronous) [2, 27]. Research that has focused on co-located collaboration can be divided into two groups based on the display type: single display (normally large displays) [28, 29], or multiple displays (based on networked devices) [2, 30, 31]. Some systems have explored integrating mobile devices (for individual views) with tabletops (for group views) to solve collaborative tasks [32,33,34]. Selwyn-Smith et al. [35] explored how cross-device environments consisting of digital tabletops, mobile tablets, and laptops can support co-located collaborative tasks in block-based programming. Besides, a study [36] with three different conditions (a digital tabletop, personal tablets, and both tabletop and personal tablets) was conducted to understand how these devices support sense-making in groups. Its results show a negative correlation between the utilization of personal devices and group performance. Numerous useful visual analytics tools have been designed to help domain experts solve analytical problems. For example, Dong et al. [37] designed a collaborative touch-table application (iPCA-CE) by adopting an existing single-user desktop analytical tool (iPCA). With the system, users can actively transition from individual desktop to shared collaborative environments without losing track of their analysis. Konkel et al. [38] explored collaborative and interactive visualizations of genomic and related scientific datasets. For their tablet-based application, they investigated the exploration of genomics work using a vertical screen, three tablets, six Sifteo-based tokens, and a smartphone. Bishop et al. [39] presented Construct-A-Vis, a tablet-based tool designed to explore the feasibility of group-based free-form and constructive visualization activities for elementary school children. They noted that shared visualization processes further prompted productive discussions and collaborative behaviors.

Isenberg et al. [2] defined collaborative visualization as the shared use of computer-supported interactive visual representations of data by more than one person to do sense-making tasks. Some researchers have focused on the challenges in visual representations and tried to provide different representation possibilities to explore collaborative aspects in co-located tasks (e.g. [40, 41]). Desolda et al. [42] explored composition techniques that allow combining spatially-aware mobile devices to foster sense-making activities. Phil et al. [43] presented a visual analytic tool, ActiVAte, which can facilitate inspection, transparency, and trust of the learning process through human-machine collaboration. The VizCept system, for example, allows users to work in their own workspace and collaborate with others using representations like concept maps and timelines [44]. Others have focused on interaction techniques to support collaboration, like linking meta-visualizations to help collaborators be aware of each other's actions [45], and eliminating the influence of fixed orientations on collaborative tabletops [4].

Some researchers have begun to explore the use of tablets to support the collaborative exploration of visualizations. Hemmings et al. [46], for example, developed the distributed system IDCVS, which allows real-time collaborative analysis of visualizations of large data sets on tablets. While representing a positive development, studies of collaborative visualizations on tablets are still very limited compared to those for tabletops, and research is still needed to understand how they should be designed and deployed.

Shared vs Non-Shared view and control

Shared virtual workspaces [36, 47, 48] and augmented reality (AR) systems [49, 50] allow users to concurrently interact with visualized data. Greenberg [51] has surveyed and discussed systems that are aimed at providing shared views among distributed worksites. As Isenberg et al. [2] highlighted, co-located synchronous collaboration comes with inherent interaction challenges that arise when multiple people have the possibility to synchronously interact together. These interaction conflicts bring to the foreground social norms and conventions that are difficult to overcome. The shared view mode on tablets is a possible solution to these issues, because when using multiple tablets users can have a shared view while still choosing to be located at a comfortable distance from other users.

How tasks are presented on the devices when users need to collaborate can affect their behaviors and performance. In addition, users are affected by their perception of territoriality issues and by how the workspaces (both individual and group) are provided on these devices [52, 53]. For example, Larsen-Ledet et al. [54] suggested that territoriality issues related to users' workspaces could serve as a framework to guide the development of social structures and maintain them during cooperative activities. Some researchers have explored collaborators' behaviors with one shared tabletop workspace [55, 56] and with cross-device workspaces [57, 58]. There seems to be little research that has explored shared view and control of collaborative visualizations in co-located settings [59, 60]. Hinckley [61] has developed several interaction sensing techniques that support transitions between the shared and personal working areas, for example, tilting the tablet toward oneself to enlarge the personal workspace. Similarly, Kaufmann [62] has investigated two modes, independent (i.e., every student can only see the elements constructed by him/herself) and collaborative (i.e., everything is visible to everybody), for Construct3D, a geometry construction tool designed for mathematics and geometry education. These papers proposed systems that share related or identical views across devices but did not provide any user evaluation. Huang et al. [63] developed the coisTable tabletop system that supports co-located collaborative activities. It divides personal and shared workspaces using additional physical representations that are intended to help users easily distinguish the boundaries of the different application spaces.

Chung et al. [64] presented a cross-display visual link technique that allows users to see connections between related information across displays. Their results showed that visual links across multiple displays effectively helped users organize and synthesize related information scattered across devices. Kraut et al. [65] have argued that a shared visual space is essential for solving complex collaborative visual problems because it can improve awareness of the state of the task and facilitate group communication. In addition, results in [66] indicate an overall positive attitude towards paired tablets, but this did not lead to significantly better task completion times because the views were not entirely synchronized and shared. In short, the above review shows that there seems to be very limited work that has thoroughly investigated the effects of shared and non-shared view and control on co-located collaborative visualizations on mobile devices.

Position arrangement during collaboration

Arrangement of users and collaborative coupling refer to where collaborators are placed and the way they interact with each other. They mainly reflect users' need to work tightly with, or independently from, one another. How users are arranged influences their performance in completing collaborative tasks. For example, Yiannis et al. [67] conducted a study that investigated the effects of strong versus loose coupling on middle school students' learning performance. Their findings showed higher conceptual learning for the students participating in the strong coupling condition. Early research on collaboration around physical tables determined that face-to-face or right-angled seating was preferred for conversations due to its support for visual contact [68]. Previous work on collaboration around a tabletop display found that a face-to-face arrangement better supported non-verbal communication than sitting side-by-side [69]. Rodden et al. [70] noted that a side-by-side arrangement helped to ease the social awkwardness of strangers, as there was less expectation of direct eye contact. Tang et al. [26] identified six coupling styles on tabletops and concluded that groups working with these coupling styles prefer common and global views. Isenberg et al. [24] extended these coupling patterns to eight and grouped them into close and loose collaboration. They observed that pairs with frequent synchronizations produced discoveries that were more successful in solving the experimental tasks.

Jakobsen and Hornbæk [25] have explored coupling patterns of collaborators when working around a wall display. They observed users’ visual attention, proximity, and verbal communication and suggest that proximity between members is related to how tightly coupled they are working together. Liu et al. [60] have explored five coupling styles around these displays but their styles correspond to very different dimensions–collaborative coupling and shared interaction support. Their results show that a shared interaction technique can facilitate collaboration by allowing collaborators to work more tightly together even when not in close proximity.

Research on collaboration coupling in tablets has also looked at gaze or eye contact and verbalization [49]. Plank et al. [66] have observed how users positioned their tablets from their personal territory to group territories by rotating or tilting the tablets to share content. Marquardt et al. [9] have explored the coupling styles for tablet users using two sociological constructs: F-Formations (the distance and relative body orientation among multiple users) and micro-mobility (how people orient and tilt devices towards one another for sharing ideas).

Previous research has found that different coupling patterns yield different communication strategies and different degrees of closeness in collaboration. As one crucial element of coupling patterns, the relative positions of collaborators reflect the relationships between them and how they work together. However, how position arrangements influence collaboration behaviors and sense-making in a co-located setting remains largely under-explored. There seems to be little work examining whether different positions result in different collaborative behavior, experience, and performance. Our research aims to extend the investigation of position arrangements and collaborative coupling with mobile tablets under two settings, shared and non-shared view, to further explore collaborators' behaviors and interaction patterns.

Solid visualization tool

Visualization tools can allow learners to visualize and manipulate abstract geometric structures and their spatial relationships and are useful to help them with their mathematical reasoning and learning [71,72,73]. In addition, peer collaboration is useful for challenges that require spatial reasoning [74]. Therefore, we developed a visualization application to facilitate the collaborative process of exploring, analyzing and learning 3D geometric shapes (see Fig. 1). These solids can be obtained from one another and their relationships are multilayered, not only dealing with planes of symmetry but also how the shapes can be derived from each other by truncating/augmenting their vertices and edges [75].

Fig. 1. The interface of the 3D solids visualization tool

Visualization application overview

The tool was developed in Unity3D. Its main goal is to assist users in visualizing and exploring 3D shapes, and it allowed us to collect experimental data and explore the differences among the various conditions. As a visualization tool, it provides no direct or explicit instructions about how users should interact with it. Users have to explore and analyze by themselves using the tool's visual interactive functions, interpret the results, and draw their own conclusions [76, 77]. Figure 1 shows the interface of the tool, which is implicitly divided into 5 sections: (1) Synchronize function: when enabled, the transformations of the three solids are synchronized as they pass from one identical shape to another; (2) User ID: it indicates how many users are using the tool at the same time; (3) 3D solid visualizations: there are three solids, a cube, a tetrahedron, and an octahedron; (4) Solid transition maps (STM): they support active navigation within and between visualizations; thumbnails of shapes are connected by lines to indicate the transitional processes of how shapes can be derived from each other; and (5) Network Connection: the Shared/Non-Shared mode can be chosen by clicking “Connect/Disconnect”.
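To make this layout concrete, the following minimal sketch models the five interface sections as application state. It is illustrative only: the tool itself is built in Unity3D, and all names and types below are our own assumptions rather than the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class ToolState:
    """Illustrative model of the tool's five interface sections (names assumed)."""
    synchronize: bool = True   # (1) Synchronize function: linked transformations on/off
    user_id: int = 1           # (2) User ID: which of the connected users this tablet is
    # (3) 3D solid visualizations: one morph state (a point on its map) per solid
    solids: Dict[str, Tuple[float, float]] = field(default_factory=lambda: {
        "cube": (0.0, 0.0), "tetrahedron": (0.0, 0.0), "octahedron": (0.0, 0.0)})
    # (4) Solid transition maps: the currently selected thumbnail on each STM
    stm_selection: Dict[str, str] = field(default_factory=dict)
    shared_mode: bool = False  # (5) Network Connection: toggled via "Connect/Disconnect"
```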

Solid visualizations

The 3D solid visualizations placed in the middle of the interface present structural information of the solids. Each solid is rendered with three different colors indicating the process of truncating their vertices and/or edges. On a corner of each solid, there is a modified solid transition map that shows the current stage of transformation of the solid. Also, it serves as an implicit link to the map located below. Figure 2 shows an example of this relationship: the cube with the map attached to the top-right corner (LEFT) and its transition map (RIGHT).

Fig. 2. The introduction of the basic solid. (LEFT) Rhombi-truncated cuboctahedron obtained by truncating all the vertices and edges of a cube; (RIGHT) Solid transition map of the cube indicating the current state of the transformed solid. The black dot on the solid's triangle (LEFT) shows the location of the current solid on its transition map (RIGHT)

Morphing of the solids

Users can dynamically and continuously morph or transform a 3D solid via the STM. Each map has an interactive selected solid (see Fig. 2 for the highlighted circles on the maps). Users can interact with this selected solid and move it anywhere on the map. All changes are reflected on the solid—this interaction is referred to as “indirect manipulation”, where users communicate with an object via a second object [78]. On the other hand, direct manipulation, which gives a more direct engagement with an object [79], is implemented as an interactive dot on the local STM of each solid. Using the dot, users can morph the solid directly by freely moving it within the local STM. By embedding both direct and indirect manipulation, users can choose the STM to morph the shape indirectly, which provides a clearer sense of global transformations, or choose the interactive dot to have direct engagement with the enlarged solids.
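As a rough illustration of the morphing operation that the interactive dot controls, the sketch below cuts the corners of a solid by a continuous parameter t. It is a simplified stand-in assuming a plain vertex/edge mesh representation; the paper does not describe the tool's actual geometry code.

```python
import numpy as np

def truncate_vertices(vertices, edges, t):
    """Cut every corner of a solid at fraction t along its incident edges.

    t = 0 leaves the solid unchanged; t = 0.5 cuts each edge at its midpoint
    (for a cube this yields the cuboctahedron). `vertices` is an (n, 3) array
    and `edges` a list of (i, j) index pairs.
    """
    v = np.asarray(vertices, dtype=float)
    points = []
    for i, j in edges:
        points.append(v[i] + t * (v[j] - v[i]))  # cut point near vertex i
        points.append(v[j] + t * (v[i] - v[j]))  # cut point near vertex j
    return np.unique(np.round(points, 9), axis=0)  # merge duplicates at t = 0.5

# Example: morph a cube continuously as the dot moves across the map.
cube_vertices = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
cube_edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
              if sum(p != q for p, q in zip(cube_vertices[a], cube_vertices[b])) == 1]
for t in (0.0, 0.25, 0.5):
    print(t, len(truncate_vertices(cube_vertices, cube_edges, t)))  # 8, 24, 12 vertices
```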

Dynamic-linking of multiple visualizations

A twin solid is a shape that has the same structural properties as the currently selected solid but differs in how it is obtained [71]. This property represents more abstract, complex connections between multiple STMs, as identical shapes can be obtained from different base solids through different transformation processes. In our tool, the STM can highlight not only the selected solid but also its twin solid(s). As shown in Fig. 3, for any selected solid on a map, the corresponding twin solid(s) will also be shown and highlighted on the other two maps. This function can facilitate global knowledge acquisition across multiple maps; that is, the solids and the maps all become dynamically linked to help visualize the existence of twin solids. Users can switch off the “Synchronization” toggle (on the left corner of the tool) to turn off the dynamic linking of visualizations.
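A hypothetical sketch of this linking logic: when the selection on one map changes, the tool looks up the twin solid(s) and highlights them on the other maps, unless the “Synchronization” toggle is off. The twin registry, class, and method names below are our assumptions, not the tool's API.

```python
# Hypothetical registry: shape name -> other base-solid maps on which a
# structurally identical twin occurs (e.g., the octahedron also arises by
# fully truncating the vertices of a tetrahedron).
TWINS = {
    "cuboctahedron": ["cube", "octahedron"],
    "octahedron": ["tetrahedron"],
}

class STM:
    """Minimal stand-in for a solid transition map widget."""
    def __init__(self, name):
        self.name, self.highlighted = name, []
    def highlight(self, shape):
        self.highlighted.append(shape)

def on_selection_changed(shape, source_map, maps, synchronize=True):
    """Highlight the selected shape on its map and its twins on the others."""
    maps[source_map].highlight(shape)
    if not synchronize:  # "Synchronization" toggle switched off
        return
    for map_name in TWINS.get(shape, []):
        if map_name != source_map:
            maps[map_name].highlight(shape)

maps = {name: STM(name) for name in ("cube", "tetrahedron", "octahedron")}
on_selection_changed("cuboctahedron", "octahedron", maps)  # twin appears on cube map
```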

Fig. 3. An exploration scenario. It shows the selected solid (currently on the Octa-Map, leftmost solid) and one twin solid on each of the other two maps (the Cube and Tetra maps)

Shared interaction and view between multiple tablets

As mentioned before, the application has two versions, Shared and Non-Shared control and view. The shared mode supports two (or more) users interacting simultaneously with a shared view and control of the visual objects using connected tablets, so that they can see what each other is doing in real time. In the non-shared mode, one user cannot see from their tablet what the other is doing. If one user is interacting with an object, it is highlighted on the other user's tablet in the Shared version, but not in the Non-Shared one (see Fig. 4).
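The shared mode implies that every interaction event must be propagated to the partner's tablet. Below is a minimal sketch of such a protocol, assuming JSON messages over UDP; the paper does not specify the actual networking layer, so the addresses, ports, and message fields here are our own.

```python
import json
import socket

PEERS = [("192.168.0.12", 9999)]  # partner tablet address (assumed for illustration)

def broadcast_interaction(sock, user_id, solid, params):
    """Send one interaction event so peers in Shared mode can mirror it."""
    event = {"user": user_id, "solid": solid, "params": params}
    data = json.dumps(event).encode("utf-8")
    for peer in PEERS:
        sock.sendto(data, peer)

def apply_remote_event(state, data, shared_mode):
    """Mirror a peer's event locally; ignored in the Non-Shared version."""
    if not shared_mode:
        return
    event = json.loads(data.decode("utf-8"))
    state[event["solid"]] = event["params"]  # update and highlight the shared object

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
broadcast_interaction(sock, user_id=1, solid="cube", params={"t": 0.25})
```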

Fig. 4. The two different modes of interaction. a) Interaction in the Shared version (synchronous change in the two applications); b) Interaction in the Non-Shared version (asynchronous change in the two applications)

Experiment design

Participants

Sixty students (32 females) aged between 18 and 28 years old (\(\text {M} = 19.90\), \(\text {SD} = 2.11\)) were recruited to participate in the study. We advertised our study through email listservs, in classes, and via online forums at a local university to ensure that participants had diverse educational backgrounds. None of them had used the tool before the experiment. Based on the results of our pre-study survey, 50% of the participants thought they had a good sense of space and 76.57% thought they were good at cooperation.

A between-subjects design was used–i.e., one participant could only interact with one position arrangement and one version of the application to avoid any carry-over effects. There were 2 (versions) \(\times\) 3 (positions) groups, that is 30 participants for each version of the tool and 10 participants for each position arrangement in each version (see Table 1). For each version, there are 3 position arrangements, C-C, F-F and S-S (as shown in Fig. 5). The position for each pair was randomly set. The pairs were allowed to discuss freely during the collaboration process.

Table 1 Overall descriptive data for all 60 participants
Fig. 5. The three position arrangements based on how participants sat during the experiment: a Side-by-Side (S-S); b Face-to-Face (F-F); and c Corner-to-Corner (C-C)

Apparatus

The application was run on a Google Pixel C tablet with a 2560 × 1080 resolution multi-touch screen. Finger touch was the only input mode, as it is the most common way of interacting with these devices. The tasks were printed on paper. During the experiment, each pair sat around one table according to a predefined position arrangement. We used a video camera to capture participants' interactions with the tool and their verbal and non-verbal communications. The interactions with the tool were also screen-captured for later assessment.

Tasks and procedure

Pre- and post-test tasks

In the pre-test and post-test stages, the task was to test the user's spatial geometric ability in cutting and augmenting the vertices and edges of three solids (cube, octahedron, and tetrahedron). We set up 12 tasks (1 mark for each task; 12 marks in total). For each question, users were asked to select, from the given options, the solid that can be obtained by cutting or augmenting the vertices and edges of the given solid. The two users of each pair were required to complete the test independently.

Collaborative tasks

In the collaboration stage, the task was still to cut and augment the vertices and edges of the three solids, but the questions were different from those in the pre- and post-test. Several sample tasks are provided (see Fig. 6). During collaboration, each pair was asked to reach the correct answer collaboratively, and both users had to provide the same answer for each question after discussing it together. There were 12 tasks during collaboration (1 mark for each task; 12 marks in total), which were shown on paper.

Fig. 6. Some sample tasks used in this study

Procedure

Each session was divided into six phases: (P1) Informing participants of the purpose of the study and the ethics regulations governing it, and then completing the consent form plus a short questionnaire to collect anonymized demographic data (\(\sim 5 \,\text {mins}\)); (P2) Pre-test with 12 knowledge-based questions about the geometric shapes within 10 mins; tasks that participants could not complete within the time limit were given 0 marks; (P3) Performing 12 predefined collaborative tasks with the tool on the task sheet (\(\sim 30 \,\text {mins}\)); (P4) Completing the engagement questionnaire (\(\sim 5 \,\text {mins}\)); (P5) Performing the post-test, which was the same as the pre-test (10 mins); and (P6) A brief interview on their experience of the experiment (\(\sim 5 \,\text {mins}\)). The whole experiment took about one hour for each pair. A training session was provided before the collaborative task for participants to become familiar with the interface and interactive features. After that, we did not provide any further assistance to participants.

Hypotheses

We tested the following four hypotheses in this experiment:

H.1.

The Shared group would perform better than the Non-Shared group on task performance:

Prior research has emphasized the importance of collaboration in supporting learning. We expected that the same would apply to collaborative problem-solving using mobile devices. We anticipated that shared view/control could facilitate thinking and sharing ideas while solving problems: by making each user's interactions continuously visible to the other, it would lead to more efficient problem-solving and exploration.

H.2.

The Side-by-Side position is the most suitable for improving task performance:

Because of their close proximity, users can communicate and share ideas more easily, with little movement, and they can communicate much more than in the other positions, which would improve task performance to some extent.

H.3.

The Shared mode would lead to higher engagement scores during collaboration than the Non-Shared mode:

With shared view/control, participants could see the effects of their partner's actions. As such, shared view/control would facilitate communication and interaction, which could contribute to a higher engagement level. Besides, because of the shared control, users could easily share their ideas without moving, which would avoid interrupting each other's operations and provide a higher sense of immersion in the collaborative task.

H.4.

The Side-by-Side position would be the most preferred arrangement:

For the position arrangements, we expected that the Side-by-Side position would lead to a higher level of engagement during the collaborative task because participants would feel more natural and would find it more convenient to communicate at a close distance.

Results

By assessing users' performance on the pre- and post-test, the collaborative tasks, and their ratings on the engagement and collaboration questions, we were able to quantify users' knowledge acquisition of the domain and their perceived levels of engagement and collaboration. We next present the results of the quantitative measures, and then combine them with the qualitative measures to cross-validate user performance and collaboration patterns. For simplicity, we use M, SD, and SE to denote mean, standard deviation, and standard error values. The significance level is set to 0.05.

Test score and completion time

Test improvement score

As mentioned earlier, in the pre- and post-test, all participants were asked to finish 12 tasks (12 marks in total) within 10 mins. Between the two tests, participants needed to complete some tasks cooperatively under the condition assigned to them. Our results show that all participants, regardless of their groups, experienced different levels of improvement after interacting with the tool. Overall, participants in the Shared version had a higher mean improvement (\(\text {M} = 2.10\), \(\text {SD} = 2.155\), \(\text {SE} = .393\)) than the Non-Shared version (\(\text {M} = 1.47\), \(\text {SD} = 2.113\), \(\text {SE} = .386\)). In each version, there were some differences in mean improvement among the position arrangements. Overall, C-C led to the highest improvement (\(\text {M} = 2.50\), \(\text {SD} = 1.650\), \(\text {SE} = .522\) for the Shared version; \(\text {M} = 2.50\), \(\text {SD} = 1.780\), \(\text {SE} = .563\) for the Non-Shared version), and F-F the lowest (\(\text {M} = 1.40\), \(\text {SD} = 2.836\), \(\text {SE} = .897\) for the Shared version; \(\text {M} = 0.70\), \(\text {SD} = 1.418\), \(\text {SE} = .448\) for the Non-Shared version), as shown in Fig. 7.

Fig. 7. The mean improvement in scores (between the pre- and post-test) in different conditions

Fig. 8. Distribution of the raw data for score improvement in different conditions (C-C: Corner-to-Corner; F-F: Face-to-Face; S-S: Side-by-Side)

Figure 8 shows the distribution of the improvement in scores between the pre- and post-test. We can see that regardless of the version, the participants in C-C achieved the best mean improvement, while F-F led to the worst improvement; however, a two-way ANOVA (Analysis of Variance) test yielded no significant effect of position (\(\text {F}_{2,54} = 2.371\), \(\text {p} = .103\), \(\eta \text {p}^2 = .081\), observed power \(= .459\)). The mean improvements in all positions for the Shared version were better than those in the Non-Shared version; however, there was no significant effect of version (\(\text {F}_{1,54} = 1.357\), \(\text {p} = .249\), \(\eta \text {p}^2 = .025\), observed power \(= .208\)) and no significant interaction effect of version \(\times\) position (\(\text {F}_{2,54} = .410\), \(\text {p} = .666\), \(\eta \text {p}^2 = .015\), observed power \(= .113\)) according to the two-way ANOVA test.
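For reference, a two-way ANOVA of this form can be reproduced with standard statistical tooling. The sketch below uses pandas and statsmodels on placeholder data; the column names and values are ours, not the study's dataset.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder layout: one row per participant with the assigned version,
# position, and pre-to-post improvement score (values invented).
df = pd.DataFrame({
    "version": ["Shared", "NonShared"] * 30,
    "position": ["C-C", "F-F", "S-S"] * 20,
    "improvement": [2, 1, 3, 0, 2, 1] * 10,
})

# improvement ~ version * position: both main effects plus their interaction,
# mirroring the F(1,54) / F(2,54) tests reported above.
model = ols("improvement ~ C(version) * C(position)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```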

Time and score on the collaborative tasks

As stated before, participants working collaboratively had to complete a set of tasks. We observed their behaviors and communication patterns and recorded the scores and the time taken to complete the tasks (12 marks in total). The answers to the questions and the time spent provide insights into task efficiency and allow us to assess how they were affected by the participants' positions and interactions with the tool.

Fig. 9. The mean time and score for collaborative tasks in different conditions (C-C: Corner-to-Corner; F-F: Face-to-Face; S-S: Side-by-Side; Ctime: the time for collaboration; Cscore: the score for collaboration)

In Fig. 9, the Shared group generally spent less time (\(\text {M} = 29.800\), \(\text {SD} = 2.704\), \(\text {SE} = .698\)) to complete the tasks but still achieved a slightly higher mean score (\(\text {M} = 9.800\), \(\text {SD} = 1.373\), \(\text {SE} = .355\)) than the Non-Shared group (Time: \(\text {M} = 33.200\), \(\text {SD} = 4.769\), \(\text {SE} = 1.231\); Score: \(\text {M} = 9.133\), \(\text {SD} = 1.356\), \(\text {SE} = .350\)). Overall, S-S participants spent the least amount of time (\(\text {M} = 30.200\), \(\text {SD} = 1.619\), \(\text {SE} = .512\)) and got the highest mean score (\(\text {M} = 10.100\), \(\text {SD} = 1.524\), \(\text {SE} = .482\)). On the other hand, C-C participants needed the most time (\(\text {M} = 33.700\), \(\text {SD} = 6.533\), \(\text {SE} = 2.066\)) and achieved the lowest score (\(\text {M} = 8.800\), \(\text {SD} = 1.398\), \(\text {SE}= .442\)).

A two-way ANOVA test yielded a significant difference in completion time among versions (\(\text {F}_{1,24} = 8.128\), \(\text {p} = .009\), \(\eta \text {p}^2 = .253\), observed power \(= .781\)) and positions (\(\text {F}_{2,24} = 3.441\), \(\text {p} = .049\), \(\eta \text {p}^2 = .223\), observed power \(= .589\)). There was also a statistically significant interaction effect between version and position on time spent (\(\text {F}_{2,24} =4.284\), \(\text {p} = .026\), \(\eta \text {p}^2 = .263\), observed power \(= .691\)). Overall, the Shared version was better than the Non-Shared one in task efficiency. A further Tukey HSD (Honestly Significant Difference) post hoc test showed that S-S obtained better performance (least amount of time and highest scores) on the tasks than the other two positions (see Table 2). Therefore, the S-S position was the best condition for task efficiency and C-C was the worst.

Table 2 Tukey HSD test results for the differences between mean task performances
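The post hoc comparison can be sketched in the same way with statsmodels' Tukey HSD routine; the per-pair completion times below are placeholders, not the study's data.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder per-pair completion times (minutes) by position arrangement.
times = pd.DataFrame({
    "position": ["C-C"] * 10 + ["F-F"] * 10 + ["S-S"] * 10,
    "minutes": [34, 33, 36, 32, 35, 31, 34, 33, 35, 34,
                31, 32, 33, 30, 32, 31, 33, 32, 31, 32,
                30, 29, 31, 30, 29, 31, 30, 30, 29, 31],
})
tukey = pairwise_tukeyhsd(endog=times["minutes"], groups=times["position"], alpha=0.05)
print(tukey.summary())  # pairwise mean differences among C-C, F-F, and S-S
```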

The average scores of the Shared and Non-Shared pairs were very similar. Although there were some differences among the mean scores in the different conditions, there was no statistically significant difference in scores between the versions (\(\text {p} = .178\), observed power \(= .265\)) or among the positions (\(\text {p} = .108\), observed power \(= .443\)), as indicated by the results of the two-way ANOVA test. Similarly, no statistically significant interaction effect was found between version and position (\(\text {p} = .559\), observed power \(= .138\)).

Perceived engagement levels during collaboration

Overall engagement ratings

The engagement level effectively reveals learners’ persistence, academic achievement and satisfaction with their learning [6, 80,81,82,83,84]. A 7-point Likert scale (from level 1 to 7) user engagement questionnaire was used to measure participants’ engagement levels based on 7 categories (via self-reporting) [85]. The questionnaire for assessing collaboration experience was derived from the work of Isenberg et al. [24] and Jakobsen and Hornbæk [25]. After making some adaptations, our final questionnaire had 23 questions on engagement level. Videos were also used to record participants’ behavior. These types of data would help us assess how engaged the participants were during the analytical process.

The Shared group obtained a slightly higher overall score for engagement (\(\text {M} = 41.817\), \(\text {SD} = 5.862\), \(\text {SE} = 1.070\)) than the Non-Shared group (\(\text {M} = 41.256\), \(\text {SD} = 5.503\), \(\text {SE} = 1.005\)). It is also noteworthy that S-S, regardless of the version used, got the highest engagement scores (\(\text {M} = 42.025\), \(\text {SD} = 6.012\), \(\text {SE} = 1.344\)). Nevertheless, a two-way ANOVA test revealed no statistical significance in the overall engagement scores between the two versions (\(\text {p} = .709\), observed power \(= .066\)) or among the three positions (\(\text {p} = .831\), observed power \(= .077\)). No statistically significant interaction effect was found between version and position on engagement either (\(\text {p} = .466\), observed power \(= .175\)).
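The reported numbers are consistent with the overall engagement score being the sum of the seven category means (7 categories with a maximum mean of 7 each, matching the observed values around 42). A sketch of that aggregation is below; the grouping of the 23 items into categories is our assumption, and one category name ("Involvement") is a placeholder since the text names only six.

```python
import numpy as np

# Assumed grouping of the 23 Likert items (rated 1-7) into the 7 categories;
# six category names appear in the text, "Involvement" is invented here.
CATEGORIES = {
    "Exploration": [0, 1, 2], "Attention": [3, 4, 5], "Satisfaction": [6, 7, 8],
    "Comfort": [9, 10, 11], "Communication": [12, 13, 14, 15],
    "Contribution": [16, 17, 18], "Involvement": [19, 20, 21, 22],
}

def engagement_scores(responses):
    """Per-category mean ratings plus an overall score (sum of the 7 means)."""
    r = np.asarray(responses, dtype=float)
    per_category = {name: r[items].mean() for name, items in CATEGORIES.items()}
    return per_category, sum(per_category.values())

# Example: a participant rating every item 6 gets an overall score of 42.
print(engagement_scores([6] * 23)[1])
```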

Ratings on the engagement categories

Figure 10 shows the results for the 7 categories of engagement for the two different versions of the tool. Overall, the Shared version (\(\text {M} = 5.974\), \(\text {SD} = .837\), \(\text {SE} = .153\)) received a slightly higher rating than the Non-Shared one (\(\text {M} = 5.894, \text {SD} = .786, \text {SE} = .144\)) on the mean engagement level, and the range of its ratings was smaller. The Shared version was rated higher on Exploration, Attention, Satisfaction, Comfort, and Communication. It could be inferred that its participants felt more involved in the exploration process. However, it seemed that participants in the Non-Shared group generally thought that they contributed more.

Fig. 10. The original data of the two different versions on the 7 categories of engagement

Fig. 11. The original data for the three different positions on the 7 categories of engagement

From Fig. 11, we can see that the 3 position arrangements received good scores, ranging between 5 and 7, on almost all categories. Compared with the other categories, Contribution got the lowest scores in all positions. We can also see that the engagement ratings of S-S users had a larger range for almost all of the 7 categories, with higher medians. Overall, the results show that S-S received the highest mean engagement (\(\text {M} = 42.025, \text {SD} = 6.012, \text {SE} = 1.344\)).

Although there were some differences in the mean engagement level (overall and sub-scales) among the different versions and positions, we did not find any statistically significant differences among the 2 \(\times\) 3 conditions (all \(\text {p} > .05\)), as indicated by the results of the two-way ANOVA test.

Subjective feedback

Position preference

In this study, participants had to complete tasks in a predefined position arrangement. After the experiment, we also collected their preferences for the different positions, had they been free to choose. The results showed that S-S (41.94%) was the most popular choice; F-F received a slightly lower proportion (37.09%); the lowest was C-C (20.97%).

Collaboration experience

Our questionnaire also had some questions about the collaboration experience: 64.51% of participants thought the relationship between partners would affect learning performance in collaboration, and 64.52% said they were interested in this collaborative mode. During the experiment, one user mentioned many times that the study was really interesting.

In addition, we asked some questions on issues of territoriality. Most users thought private and public spaces were both important when collaborating: although a shared space is essential for collaboration, some private area or space is still needed to allow independent thinking and exploration.

Discussion

Shared and Non-Shared control and view

We found that the average improvement from pre- to post-test of the Shared participants was higher than that of the Non-Shared group, which aligns with H.1. This shows that the Shared type of collaborative mode with mobile devices may be more beneficial for learning. However, although the Non-Shared group did not improve as much, users in both the Shared and Non-Shared versions performed well in completing the tasks correctly.

When collaborating, participants in the Shared group were able to complete the learning tasks in a shorter time but still gained more understanding than the Non-Shared group. From this result, we can conclude that sharing visualizations of tasks among users can help them share their perspectives with their collaborators in time, which also contributes to improvements in task efficiency. They can communicate with each other by sharing ideas even if they are not very close. Otherwise, users have to move their mobile devices from their personal territory to group territories by rotating or tilting the devices to improve readability or share/highlight content, which wastes time and affects the smoothness of communication between users to some extent. This may be why most users preferred the Shared mode during collaborative learning.

Besides, participants gave high scores for their engagement level. It seems that a collaborative setting made users feel engaged. Participants in the Shared mode provided slightly higher scores; in this sense, these participants considered shared interaction more focused and involving. However, the ANOVA test yielded no significant difference between the Shared and Non-Shared versions, which contradicts our H.3. One possibility is that our experimental setting placed pairs around one table, which kept the distance between paired users very close. Therefore, users in any position could see their partner's device when the tablet was moved, and could show their ideas and activities to their partner, which made user engagement levels similar across the different modes.

The slightly higher scores from the Shared group on Communication, Exploration, Satisfaction, and Comfort could imply that having a partner eases the mental burden of completing the tasks; participants may feel more comfortable and supported. However, their lower rating for Contribution gives some insight into the potential drawbacks of a shared view when using tablets. When two participants have separate analytical and thinking orientations (e.g., one participant would like to morph the shape while the other would like to rotate it), this can lead to some attention anxiety, which may negatively affect the learning and collaboration process.

Position arrangements

Participants placed at the C-C and S-S positions achieved higher improvements in test scores, which partially aligns with H.2. This means that these position arrangements may be suitable for collaborative learning tasks.

It is also worth mentioning that participants in the S-S position completed the collaborative tasks in the shortest time and achieved the highest scores of the three positions. Therefore, S-S seems the most suitable in this kind of learning scenario. Normally, users placed in the S-S position were closer to each other, which made it more convenient for them to share their ideas; this also led to higher task efficiency. In terms of the two versions, S-S in the Shared version may be the most suitable for collaborative learning. In addition, because the ANOVA test yielded a significant effect of version and position on completion time, it is perhaps important to consider both the mode and the position of users when completing collaborative tasks. In research on collaboration around a tabletop display [69], the face-to-face position supported communication better than the side-by-side arrangement, which differs from our results, where S-S pairs spent less time on the collaborative tasks than F-F pairs.

The three position arrangements received relatively high scores on all 7 categories. Overall, S-S had the best ratings on engagement, which shows that this position had a higher acceptance level and aligns with H.4. However, we did not find any significant difference among the positions on user engagement, although we expected users to prefer the S-S position more. Similarly, we think that the reason was the close proximity between participants regardless of their position arrangement. The distance between users in any position was close and similar, which may have made user engagement levels similar across the different positions.

However, in terms of our participants' subjective preference for the position arrangement, the results show that S-S was the most popular choice; F-F received a slightly lower proportion; and C-C was the least preferred. Most participants wanted to be in the S-S position if they had a choice. Prior research has shown that the side arrangement is also common for collaborative interaction around a tabletop [34]. This result may be because sitting side by side is the more usual behavior when two or more people are together; therefore, S-S had a higher subjective acceptance level among users.

Collaboration patterns and behaviors

For the Shared condition, we observed that one of the reasons for collaborators to speak more was that they tended to explain their actions to their partners or to inform their partners of their intentions. Although they actively shared ideas with each other, they also interacted with their own devices most of the time. This suggests that these participants wanted to retain some control of their interaction. For example, these two comments show this: “Collaborating is fun, but sometimes I have to wait for my partner to finish...” and “It could be useful if I can choose to share my interactions or not”.

For the Non-Shared pairs, we observed that their communication was not as frequent or as fluid as that of the Shared pairs. They frequently looked directly at their partner's tablet to see the manipulations being performed while they communicated with each other. We often saw them re-orienting and tilting their own devices towards their partner to facilitate information and idea sharing. Sometimes they directly manipulated the visual objects on their partner's device to demonstrate their thinking and reasoning process. However, it is worth mentioning that participants in the Non-Shared condition felt that they had contributed more to solving the tasks. That is, they might have tended to work as single individuals, rather than collaboratively, and because of this they felt that they had contributed more to the problem-solving process. This also aligns with the answers to the engagement questions, in which the individuals in the Shared group felt that they made less overall contribution.

We also observed the users' behavior patterns during collaboration. The positions allowed different levels of visibility of the partner's screen, which could have affected the efficiency of grounding in communication. As we expected, when collaborators were placed in the S-S position, they worked more closely together. For example, based on the qualitative analysis, participants in the S-S position communicated with and demonstrated their interactions to their partners more frequently. In the S-S position, because of the short distance, even in Shared mode users preferred to show their personal device to the partner to share their ideas. When they were in the F-F and C-C positions, they had more eye contact. However, in these two positions it is not convenient for users to see their partner's device, and they had to move the tablet to show their operations in the Non-Shared mode.

Implications of our findings

Based on our results, we are able to distil these three design implications for deploying collaborative visual analytic systems in a co-located cross-device setting.

  • If the goal is to maximize learning within a limited time, shared interaction/view and both C-C and S-S positions could be chosen.

  • If the goal is to maximize task efficiency in completing a set of tasks cooperatively, S-S position with shared view/control could be considered.

  • If there are multiple dynamically linked visualizations, it is useful to provide a function for users to switch between the shared and non-shared modes.

Limitations and future work

There are several limitations in this research. Although the size of our sample population is in line with experiments of a similar nature [44], it may be useful to see if a larger sample would lead to similar or different results. Another limitation is that, instead of strictly controlling the time during collaboration, we allowed participants to finish all tasks without a time limit. Although the time spent in the different conditions was similar, it may have affected their learning performance. Also, our study focused on a co-located synchronous collaborative setting, so our findings may not apply to situations where users need to perform visual analytics tasks in a remote or asynchronous manner. Additionally, as with other similar studies, our experiment was conducted in a lab setting. Given the potential use of the tool in educational settings, it will be useful to run the experiment in a natural setting (such as a classroom). This would allow us to explore other factors that may affect the use of the tool (like the influence of a teacher and the integration of related class content). Groupware technology is developing further, and it will be useful to explore and do comparative assessments with other emergent technologies like augmented reality [86] and virtual reality [87, 88]. Virtual reality technologies are becoming more and more accessible, and visual analytics tasks may then happen anytime and anywhere, collaboratively, without space and time constraints. Our next step is to conduct a similar study in an immersive virtual reality environment, in which users will not be co-located physically but will be virtually co-present in a shared virtual workspace. We would also like to see how we can integrate AR with tablets, explore how this coupling could benefit collaborative interaction with visual representations, and extrapolate design principles for an AR-tablet integrated environment. For example, with virtual reality, we can explore the design of virtual shared/non-shared spaces, collaboration patterns, and new gesture-based interactions with 3D objects [89,90,91,92]. Finally, our experiment involved pairs of participants. It will be interesting to explore whether the patterns found in our study remain the same for 3 or more users working together on co-located collaborative visual tasks.

Conclusion

This paper reports the results of an experiment that examined 2 \(\times\) 3 groups of paired users who were required to work together to complete a set of analytical tasks using mobile tablets. We used a visualization tool that supports the exploration of the properties of 3D shapes as the testbed for our study, which investigated the influence that two factors, shared/non-shared control/view and position arrangement, have on users' knowledge acquisition, engagement level, and learning efficiency. Our results show that paired users with a shared view performed better when completing the set of tasks collaboratively. Shared view and control could also improve learning and task efficiency. Additionally, the results show that the Side-by-Side (S-S) position leads to less time spent on collaborative tasks and greater perceived user preference. Overall, the results show that the Shared version used in the S-S position is suitable for supporting collaborative cross-tablet explorations of visual content.

Data Availability Statement

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

C-C:

Corner-to-Corner

F-F:

Face-to-Face

S-S:

Side-by-Side

STM:

Solid transition maps

References

  1. Isenberg P, Zuk T, Collins C, Carpendale S (2008) Grounded evaluation of information visualizations. In: Proceedings of the 2008 Workshop on BEyond time and errors: novel evaluation methods for Information Visualization (BELIV’08), Florence, Italy, 5 April 2008

  2. Isenberg Petra, Elmqvist Niklas, Scholtz Jean, Cernea Daniel, Ma Kwan Liu, Hagen Hans (2011) Collaborative visualization: definition, challenges, and research agenda. Inf Vis 10(4):310–326

  3. Soares Anderson Gregório Marques, Dos Santos Carlos Gustavo Resque, De Paula Mendonça Sandro, Carneiro Nikolas Jorge Santiago, Miranda Brunelli Pinto, De Araújo Tiago Davi Oliveira, De Freitas Alexandre Abreu , De Morais Jefferson Magalhães, Meiguins Bianchi Serique (2016) A review of ways and strategies on how to collaborate in information visualization applications. In: Paper presented at the 20th International Conference Information Visualisation (IV), Lisbon, Portugal, 19–22 July 2016

  4. Alallah F, Jin D, Irani P (2010) OA-graphs: orientation agnostic graphs for improving the legibility of charts on horizontal displays. In: Paper presented at the ACM international conference on interactive tabletops and surfaces, ITS 2010, Saarbrücken, Germany, 7–10 November 2010

  5. Alvarez C, Brown C, Nussbaum M (2011) Comparative study of netbooks and tablet PCs for fostering face-to-face collaborative learning. Comput Hum Behav 27(2):834–844

  6. Haber J, Nacenta MA, Carpendale S (2014) Paper vs. tablets: the effect of document media in co-located collaborative work. In: Proceedings of the 2014 international working conference on advanced visual interfaces, Politecnico di Milano, Milan, Italy, 27–29 May 2014

  7. Peter Hamilton, Daniel J. Wigdor (2014) Conductor: Enabling and Understanding Cross-Device Interaction. In: Proceedings of the SIGCHI conference on human factors in computing systems, Toronto Ontario, Canada, April 2014

  8. Hinckley Ken (2003) Synchronous gestures for multiple persons and computers. In: Proceedings of the 16th annual ACM symposium on User interface software and technology (UIST’03), Vancouver, Canada, 2–5 November 2003

  9. Marquardt Nicolai, Hinckley Ken, Greenberg Saul (2012) Cross-device interaction via micro-mobility and f-formations. In: Proceedings of the 25th annual ACM symposium on user interface software and technology (UIST’12), Cambridge, MA, USA, 7–10 October 2012

  10. Rädle R, Jetter H-C, Marquardt N, Reiterer H, Rogers Y (2014) Huddlelamp: Spatially-Aware mobile displays for ad-hoc around-the-table collaboration. In: Proceedings of the ninth ACM international conference on interactive tabletops and surfaces (ITS’14), Dresden, Germany, 16–19 November 2014

  11. Wozniak Pawel, Goyal Nitesh, Kucharski Przemys, Lischke Lars, Mayer Sven, Fjeld Morten (2016) RAMPARTS: Supporting Sensemaking with Spatially-Aware Mobile Interactions. In: Proceedings of the 2016 CHI conference on human factors in computing systems, San Jose, CA, USA, 7–12 May 2016

  12. Anderson R, Chung O, Davis KM, Davis P, Prince C, Razmov V, Simon B (2006) Classroom presenter-A classroom interaction system for active and collaborative learning. The impact of tablet PCs and pen-based technology on education, 5 pages

  13. Benford Steve (2001) Collaborative virtual environments. Commun ACM 44(7):79–85

  14. Goodwin Kristy (2012) Use of tablet technology in the classroom. NSW Department of Education and Communities 2012

  15. Koile Kimberle, Singer David (2006) Development of a tablet-PC-based system to increase instructor-student classroom interactions and student learning. The impact of tablet PCs and pen-based technology on education: vignettes, evaluations, and future directions. pp 115–122

  16. Tutty Jodi, White Barbara (2006) Tablet classroom interactions. In: Proceedings of the 8th Australasian conference on computing education, Hobart, Australia, 2006

  17. Garcia-Sanjuan F, Jurdi S, Jaen J, Nacher V (2018) Evaluating a tactile and a tangible multi-tablet gamified quiz system for collaborative learning in primary education. Comput Educ 123:65–84

  18. Liu CC, Kao LC (2007) Do handheld devices facilitate face-to-face collaboration? Handheld devices with large shared display groupware to facilitate group interactions. J Comput Assist Learn 23(4):285–299

  19. Luyten Kris, Verpoorten Kristof, Coninx Karin (2007) Ad-hoc co-located collaborative work with mobile devices. In: Proceedings of the 9th international conference on Human computer interaction with mobile devices and services (MobileHCI’07), Singapore, 9–12 September 2007

  20. Vanhulst P, Evequoz F, Tuor R, Lalanne D (2018) A descriptive attribute-based framework for annotations in data visualization. Int Joint Conf Comput Vis Imaging Comput Graph 2018:143–166

  21. Wu D, Liu B, Yang Q, Wang R (2020) Social-aware cooperative caching mechanism in mobile social networks. J Netw Comput Appl 149:102457

  22. Mandryk RL, Scott SD, Inkpen KM (2002) Display factors influencing co-located collaboration. Conf Suppl ACM CSCW 2:10

  23. Inkpen K, Hawkey K, Kellar M, Mandryk R, Parker K, Reilly D, Whalen T (2005) Exploring display factors that influence co-located collaboration: angle, size, number, and user arrangement. In: Proceedings of HCI international, Las Vegas, Nevada, USA, 22–27 July 2005

  24. Isenberg Petra, Fisher Danyel, Paul Sharoda A, Morris Meredith Ringel, Inkpen Kori, Czerwinski Mary (2011) Co-located collaborative visual analytics around a tabletop display. IEEE Trans Vis Comput Graph 18(5):689–702


  25. Jakobsen MR, Hornbæk K (2014) Up close and personal: collaborative work on a high-resolution multitouch wall display. ACM Trans Comput-Hum Interact (TOCHI) 21(2):1–34


  26. Tang A, Tory M, Po B, Neumann P, Carpendale S (2006) Collaborative coupling over tabletop displays. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI’06), Montréal, Québec, Canada, April 2006

  27. Billinghurst M, Cordeil M, Bezerianos A, Margolis T (2018) Collaborative immersive analytics. In: Immersive analytics. Springer, pp 221–257

  28. Leftheriotis I, Chorianopoulos K, Jaccheri L (2016) Design and implement chords and personal windows for multi-user collaboration on a large multi-touch vertical display. Hum-Cent Comput Inf Sci 6(1):1–19


  29. Langner R, Kister U, Dachselt R (2018) Multiple coordinated views at large displays for multiple users: empirical findings on user behavior, movements, and distances. IEEE Trans Vis Comput Graph 25(1):608–618


  30. Brudy F, Budiman JK, Houben S, Marquardt N (2018) Investigating the role of an overview device in multi-device collaboration. In: Proceedings of the 2018 CHI conference on human factors in computing systems (CHI ’18), Montréal, Canada, 21–26 April 2018

  31. Konkel MK, Ullmer B, Shaer O, Mazalek A, Branton C (2020) Toward tangibles and display-rich interfaces for co-located and distributed genomics collaborations. Pers Ubiquit Comput 1–13

  32. McGrath W, Bowman B, McCallum D, Hincapié-Ramos JD, Elmqvist N, Irani P (2012) Branch-explore-merge: facilitating real-time revision control in collaborative visual exploration. In: Proceedings of the 2012 ACM international conference on interactive tabletops and surfaces, Cambridge, Massachusetts, USA, November 2012

  33. Seifert J, Simeone A, Schmidt D, Holleis P, Reinartz C, Wagner M, Gellersen H, Rukzio E (2012) MobiSurf: improving co-located collaboration through integrating mobile devices and interactive surfaces. In: Proceedings of the 2012 ACM international conference on interactive tabletops and surfaces (ITS’12), Cambridge, Massachusetts, USA, November 2012

  34. Zagermann J, Pfeil U, Rädle R, Jetter HC, Klokmose C, Reiterer H (2016) When tablets meet tabletops: the effect of tabletop size on around-the-table collaboration with personal tablets. In: Proceedings of the 2016 CHI conference on human factors in computing systems (CHI’16), San Jose, CA, USA, 7–12 May 2016

  35. Selwyn-Smith B, Anslow C, Homer M, Wallace JR (2019) Co-located collaborative block-based programming. In: Paper presented at the 2019 IEEE symposium on visual languages and human-centric computing (VL/HCC), Memphis, TN, USA, 15–17 October 2019

  36. Wallace JR, Scott SD, MacGregor CG (2013) Collaborative sensemaking on a digital tabletop and personal tablets. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI’13), Paris, France, 27 April–2 May 2013

  37. Jeong DH, Ji SY, Suma EA, Yu B, Chang R (2015) Designing a collaborative visual analytics system to support users’ continuous analytical processes. Hum-Cent Comput Inf Sci 5(1):1–20


  38. Konkel MK, Ullmer B, Shaer O, Mazalek A (2019) Envisioning tangibles and display-rich interfaces for co-located and distributed genomics collaborations. In: Proceedings of the 8th ACM international symposium on pervasive displays, Palermo, Italy, 12–14 June 2019

  39. Bishop F, Zagermann J, Pfeil U, Sanderson G, Reiterer H, Hinrichs U (2019) Construct-A-Vis: exploring the free-form visualization processes of children. IEEE Trans Vis Comput Graph 26(1):451–460


  40. Heer J, Viégas FB, Wattenberg M (2009) Voyagers and voyeurs: supporting asynchronous collaborative visualization. Commun ACM 52(1):87–97


  41. Tobiasz M, Isenberg P, Carpendale S (2009) Lark: coordinating co-located collaboration with information visualization. IEEE Trans Vis Comput Graph 15(6):1065–1072


  42. Desolda G, Ardito C, Jetter HC, Lanzilotti R (2019) Exploring spatially-aware cross-device interaction techniques for mobile collaborative sensemaking. Int J Hum-Comput Stud 122:1–20


  43. Legg P, Smith J, Downing A (2019) Visual analytics for collaborative human-machine confidence in human-centric active learning tasks. Hum-Cent Comput Inf Sci 9(1):5


  44. Chung H, Yang S, Massjouni N, Andrews C, Kannan R, North C (2010) VizCept: supporting synchronous collaboration for constructing visualizations in intelligence analysis. In: Paper presented at the 2010 IEEE symposium on visual analytics science and technology, Salt Lake City, Utah, USA, 24–29 October 2010

  45. Isenberg P, Fisher D (2009) Collaborative brushing and linking for co-located visual analytics of document collections. Comput Graph Forum 28(3):1031–1038


  46. Hemmings M, Krahn R, Lary D, McGeer R, Ricart G, Röder M (2016) The Ignite distributed collaborative scientific visualization system. In: The GENI Book. Springer, Berlin, pp 451–477

  47. Balakrishnan AD, Fussell SR, Kiesler S (2008) Do visualizations improve synchronous remote collaboration? In: Proceedings of the 26th annual CHI conference on human factors in computing systems, Florence, Italy, 5–10 April 2008

  48. Chuah MC, Roth SF (2003) Visualizing common ground. In: Proceedings of the international conference on information visualization, London, UK, 16–18 July 2003

  49. Moraes AC, Eler DM, Brega JRF (2014) Collaborative information visualization using a multi-projection system and mobile devices. In: Paper presented at the 18th international conference on information visualisation, Paris, France, 16–18 July 2014

  50. Ens B, Lanir J, Tang A, Bateman S, Lee G, Piumsomboon T, Billinghurst M (2019) Revisiting collaboration through mixed reality: the evolution of groupware. Int J Hum-Comput Stud 131:81–98


  51. Greenberg S (1990) Sharing views and interactions with single-user applications. ACM SIGOIS Bull 11(2–3):227–237


  52. Papangelis K, Chamberlain A, Lykourentzou I, Khan VJ, Saker M, Liang HN, Sadien I, Cao T (2020) Performing the digital self: understanding location-based social networking, territory, space, and identity in the city. ACM Trans Comput-Hum Interact 27(1):1–26


  53. Papangelis K, Metzger M, Sheng Y, Liang HN, Chamberlain A, Khan VJ (2017) “Get Off My Lawn!” Starting to understand territoriality in location-based mobile games. In: Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems, Denver, Colorado, USA, May 2017

  54. Larsen-Ledet I, Korsgaard H (2019) Territorial functioning in collaborative writing. Comput Support Cooper Work (CSCW) 28(3–4):391–433


  55. Scott SD, Carpendale MST, Inkpen KM (2004) Territoriality in collaborative tabletop workspaces. In: Proceedings of the 2004 ACM conference on computer supported cooperative work, Baltimore, Maryland, USA, 15–19 February 2004

  56. Tse E, Histon J, Scott SD, Greenberg S (2004) Avoiding interference: how people use spatial separation and partitioning in SDG workspaces. In: Proceedings of the 2004 ACM conference on computer supported cooperative work, Baltimore, Maryland, USA, 15–19 February 2004

  57. Albertos-Marco F, Penichet VM, Gallud JA (2019) Foundations for distributing the interaction: facilitating universal access through Responsive Cross-Device Applications. Univ Access Inf Soc 18(4):821–835


  58. Xu Y, Wang L, Xu Y, Qiu S, Xu M, Meng X (2019) Cross-device task interaction framework between the smart watch and the smart phone. Pers Ubiquit Comput 1–11

  59. Forlines C, Lilien R (2008) Adapting a single-user, single-display molecular visualization application for use in a multi-user, multi-display environment. In: Proceedings of the working conference on advanced visual interfaces (AVI’08), Napoli, Italy, 28–30 May 2008

  60. Liu C, Chapuis O, Beaudouin-Lafon M, Lecolinet E (2016) Shared interaction on a wall-sized display in a data manipulation task. In: Proceedings of the 2016 CHI conference on human factors in computing systems (CHI’16), San Jose, CA, USA, 7–12 May 2016

  61. Hinckley K (2003) Distributed and local sensing techniques for face-to-face collaboration. In: Proceedings of the 5th international conference on multimodal interfaces, Vancouver, British Columbia, Canada, 5–7 November 2003

  62. Kaufmann H, Schmalstieg D (2002) Mathematics and geometry education with collaborative augmented reality. In: Paper presented at the ACM SIGGRAPH 2002 conference abstracts and applications, San Antonio, Texas, USA, 21–26 July 2002

  63. Huang Z, Zhu Y, Mao X, Su T, Fu X, Fei G (2020) coisTable: an individual-and-spatial-aware tabletop system for co-located collaboration. In: Paper presented at the 6th international conference on dependable systems and their applications (DSA), Harbin, China, 3–6 January 2020

  64. Chung H, North C (2018) SAViL: cross-display visual links for sensemaking in display ecologies. Pers Ubiquit Comput 22(2):409–431


  65. Kraut RE, Gergle D, Fussell SR (2002) The use of visual information in shared visual spaces: informing the development of virtual co-presence. In: Proceedings of the international conference on computer supported cooperative work (CSCW’02), New Orleans, Louisiana, USA, 16–20 November 2002

  66. Plank T, Jetter HC, Rädle R, Klokmose CN, Luger T, Reiterer H (2017) Is two enough?! Studying benefits, barriers, and biases of multi-tablet use for collaborative visualization. In: Proceedings of the 2017 CHI conference on human factors in computing systems (CHI’17), Denver, CO, USA, 6–11 May 2017

  67. Georgiou Y, Kyza EA (2020) Bridging narrative and locality in mobile-based augmented reality educational activities: effects of semantic coupling on students’ immersion and learning gains. Int J Hum Comput Stud 2020:102546


  68. Sommer R (1969) Personal space: the behavioural basis of design. Prentice-Hall, Upper Saddle River


  69. Ha V, Inkpen KM, Mandryk RL, Whalen T (2006) Direct intentions: the effects of input devices on collaboration around a tabletop display. In: Paper presented at the first IEEE international workshop on horizontal interactive human-computer systems (TABLETOP’06), Adelaide, Australia, 5–7 January 2006

  70. Rodden T, Rogers Y, Halloran J, Taylor I (2003) Designing novel interactional workspaces to support face to face consultations. In: Proceedings of the ACM CHI 2003 human factors in computing systems conference, Ft. Lauderdale, Florida, USA, 5–10 April 2003

  71. Liang HN, Sedig K (2010) Can interactive visualization tools engage and support pre-university students in exploring non-trivial mathematical concepts? Comput Educ 54(4):972–991


  72. Luff P, Heath C (1998) Mobility in collaboration. In: Proceedings of the 1998 ACM conference on computer supported cooperative work (CSCW’98), Seattle, WA, USA, 14–18 November 1998

  73. Liaqat A, Munteanu C (2019) Social learning frameworks for analyzing collaboration with marginalized learners. In: Paper presented at the conference companion publication of the 2019 on computer supported cooperative work and social computing, Austin, Texas, 9–13 November 2019

  74. Phelps E, Damon W (1989) Problem solving with equals: peer collaboration as a context for learning mathematics and spatial concepts. J Educ Psychol 81(4):639–646


  75. Mintz R, Litvak S, Yair Y (2001) 3D-virtual reality in science education: an implication for astronomy teaching. J Comput Math Sci Teach 20(3):293–305


  76. Sedig K, Rowhani S, Liang HN (2005) Designing interfaces that support formation of cognitive maps of transitional processes: an empirical study. Interact Comput 17(4):419–452


  77. Liang HN, Sedig K (2010) Role of interaction in enhancing the epistemic utility of 3D mathematical visualizations. Int J Comput Math Learn 15(3):191–224


  78. Roberts JC (2007) State of the art: coordinated & multiple views in exploratory visualization. In: Paper presented at the fifth international conference on coordinated and multiple views in exploratory visualization (CMV’07), Zurich, Switzerland, July 2007

  79. Ballas JA, Heitmeyer CL, Pérez MA (1992) Evaluating two aspects of direct manipulation in advanced cockpits. In: Proceedings of the SIGCHI conference on human factors in computing systems, Monterey, California, 3–7 June 1992

  80. Carini RM, Kuh GD, Klein SP (2006) Student engagement and student learning: testing the linkages. Res High Educ 47(1):1–32


  81. SEQ Student engagement questionnaire. https://www.surveymonkey.com/r/B53PZH8. Accessed 07 July 2019

  82. Hutchins EL, Hollan JD, Norman DA (1985) Direct manipulation interfaces. Hum Comput Interact 1(4):311–338


  83. Sedig K, Klawe M, Westrom M (2001) Role of interface manipulation style and scaffolding on cognition and concept learning in learnware. ACM Trans Comput Hum Interact 8(1):34–59


  84. Christenson SL, Reschly AL, Wylie C (eds) (2012) Handbook of research on student engagement. Springer, New York


  85. Hung Y, Parsons P (2017) Assessing user engagement in information visualization. In: Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems (CHI EA’17), Denver, CO, USA, 6–11 May 2017

  86. Ens B, Lanir J, Tang A, Bateman S, Lee G, Piumsomboon T, Billinghurst M (2019) Revisiting collaboration through mixed reality: the evolution of groupware. Int J Hum Comput Stud 131:81–98


  87. Lu F, Yu D, Liang HN, Chen W, Papangelis K, Ali NM (2018) Evaluating engagement level and analytical support of interactive visualizations in virtual reality environments. In: Paper presented at 2018 IEEE international symposium on mixed and augmented reality (ISMAR), Darmstadt, Germany, 16–20 October 2018

  88. Martikainen S, Wikström V, Falcon M, Saarikivi K (2019) Collaboration face-to-face and in virtual reality: empathy, social closeness, and task load. In: Paper presented at the conference companion publication of the 2019 on computer supported cooperative work and social computing, Austin, Texas, 9–13 November 2019

  89. Liang HN, Lu F, Shi Y, Nanjappan V, Papangelis K (2019) Evaluating the effects of collaboration and competition in navigation tasks and spatial knowledge acquisition within virtual reality environments. Future Gener Comput Syst 95:855–866


  90. Wu H, Luo W, Pan N, Nan S, Deng Y, Fu S, Yang L (2019) Understanding freehand gestures: a study of freehand gestural interaction for immersive VR shopping applications. Hum Cent Comput Inf Sci 9(1):43


  91. Nanjappan V, Liang HN, Lu F, Papangelis K, Yue Y, Man KL (2018) User-elicited dual-hand interactions for manipulating 3D objects in virtual reality environments. Hum Cent Comput Inf Sci 8(1):31


  92. Zhang F, Wu TY, Pan JS, Ding G, Li Z (2019) Human motion recognition based on SVM in VR art media interaction environment. Hum Cent Comput Inf Sci 9(1):40



Acknowledgements

We would like to thank the participants for their time and the reviewers for their comments and suggestions that have helped improve the paper.

Funding

This research was funded in part by Xi’an Jiaotong-Liverpool University Key Program Special Fund (KSF-A-03), Key Laboratory Platform for VR Technology (RRSP10120170029) and Xi’an Jiaotong-Liverpool University Research Development Fund.

Author information


Contributions

All authors contributed to this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Hai-Ning Liang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chen, L., Liang, HN., Lu, F. et al. Collaborative behavior, performance and engagement with visual analytics tasks using mobile devices. Hum. Cent. Comput. Inf. Sci. 10, 47 (2020). https://doi.org/10.1186/s13673-020-00253-7


Keywords