
How does this message make you feel? A study of user perspectives on software update/warning message design


Software update messages are commonly used to inform users about software updates, recent bug fixes, and system vulnerabilities, and to suggest recommended actions (e.g., updating software). While various design features (e.g., update options, message layout, update message presentation) of these messages can influence the actions taken by users, no prior study has investigated users' opinions regarding various design alternatives. To address this void, this paper focuses on identifying software update message design features (e.g., layout, color, content) that may affect users positively or negatively. Toward that end, we conducted a user study in which participants were shown 13 software update messages along with 1 virus warning message. We collected responses from 155 users through an online survey. Participants gave a total of 809 positive comments and 866 negative comments, along with rankings of each image in terms of perceived importance, noticeability, annoyance, and confusion. As many of the comments are repetitive and often contain multiple themes, we manually analyzed them and performed a bottom-up, inductive coding to identify and refine the underlying themes. Over multiple iterations, positive comments were grouped into 52 categories, which were subsequently grouped under four themes. Similarly, negative comments were first grouped into 38 categories, which were subsequently grouped under four themes. Based on our analysis, we present the list of design features that are found to be highly correlated, either positively or negatively, with confusion, annoyance, noticeability, and importance.


Keeping computing systems up-to-date with the latest software and security updates [14] is one of the most effective ways to prevent security attacks, which often exploit known software vulnerabilities [57]. Unfortunately, even as software companies constantly work to identify security vulnerabilities and release fixes promptly, users often ignore recommended updates, exposing their systems to potential cyber-attacks. As eliminating humans from the system maintenance loop is often neither possible nor desirable, it is critical to understand the underlying reasons behind the widespread reluctance of users to update their systems. Once these reasons are identified, software update messages can be designed to effectively communicate the risk of not updating and convince users to take the necessary actions voluntarily.

While recent efforts have studied the complexity of software update message content and pointed out that users are often confused by update messages [8], we argue that message content is only one aspect of software update/warning message design; other design factors (e.g., layout, placement, color) that may also significantly affect users' behavior have not been studied. Inspired by prior efforts [3, 4] and to address this void, this paper aims to identify design features that may significantly influence the level of confusion, annoyance, noticeability, and perceived importance experienced by users once a software update message is delivered. Our analysis considers several variables, all operationally defined in our study using survey instruments and self-reported ratings, as explained in “Definition of variables”.

This report presents a study of 155 users using real software update and warning messages. Each participant was shown 14 images in random order and asked to rate on a Likert scale how important, annoying, confusing, and noticeable they considered each message, as well as to describe what they “liked” and “didn’t like” about each message in an open-ended format. The messages shown to participants varied in terms of software type/brand, message complexity, font size, background/feature color, button design, and offered features.

Our analysis centers on a bottom-up inductive coding to identify the underlying themes across comments left by participants, together with a quantitative analysis of the ratings participants gave each message. Over multiple iterations, 809 positive comments were initially grouped into 52 codes. Similarly, 866 negative comments were grouped into 38 codes. After further analysis, we identified that comments (both positive and negative) fell under one of the following themes: (1) design/layout, (2) message content, and (3) update mechanism. In addition to these three themes, users also left comments expressing emotions toward updates and software brand in general, which we discuss separately in “General affective-cognitive considerations by participants”.

To summarize, in this paper we make the following key contributions:

  1.

    We analyze and present the composition of themes derived from the coding analysis along with patterns noticed in the comments.

  2.

    To identify the design features that are correlated to message noticeability, perceived importance, level of annoyance, and confusion caused by the message, we analyze the relationship between comment frequency and average participant ratings for each message.

  3.

    We present evidence of the effect of prior experience on current behavior in response to software update and warning messages.

The rest of the paper is organized as follows: “Related work” describes prior research related to our work. “Methods” explains the design of our data collection policy and the mechanism used in this paper. “Results” presents our results and relates them back to our paper’s central argument. “Limitations” discusses the limitations of our study along with future directions. Finally, “Conclusion” concludes the paper.

Related work

Our work is motivated by prior work from other domains such as marketing research and communication theory that investigated the effect of various message design features on effective risk communication for consumer products such as poisons, cigarettes, fire extinguishers, cars, and industrial equipment [9–16]. Specifically, prior investigations looked into the impact of font size, contrasting colors, pictures, and audio on the salience and noticeability of warning messages [11, 16–19]. The effect of word choice, layout, and placement on message effectiveness and level of perceived hazard has also been studied [15, 20, 21]. Human factors such as personal attitudes, beliefs, and past experience, and demographic variables such as age, gender, language, and ethnicity, are all shown to impact the effectiveness of warning messages [8, 19, 21–23].

While a large volume of research investigates the problem of designing effective warning messages in other domains, only a limited number of prior efforts can be found in the domain of software, and these primarily focus on analyzing the understandability of messages. Among these, one recent work investigated the effect of education level on the understandability of computer warning messages and reported that, on average, a user needs at least 10 years of education to understand the presented messages [8]. Another recent work reported that only 17 % of users paid attention to Android permission settings while installing new mobile application software, and only 3 % showed full comprehension of the messages [24], demonstrating a widespread lack of comprehension and attention. Another work studied the affective-cognitive and linguistic aspects of software warning and security messages and pointed out that the linguistic complexity of messages is often related to effects on attention, attitudes, and beliefs [25]. All of these cases point to deficiencies in update message design and content, since users should be able to easily understand these important messages.

Another team found large differences between the actual changes made by updates and the changes participants thought were being made [4]. We argue that this failure to grab attention and effectively communicate the intended information falls on the software messages themselves, and it is important to identify why users do not pay attention to or understand the messages.

These issues have prompted some to try to fix or avoid this disconnect between users and software companies. Several have looked into effective notification strategies [3, 26, 27], but these studies mostly focus on how to manage the notification system rather than on the design of the message itself. An applicable lesson from these projects is that past experiences weigh heavily in decision making when it comes to software updates [3]: bad experiences with a software’s updates make users less likely to apply updates for that software in the future.

Though proper message delivery is important to adherence, the design and communication potential of those messages is also important. A notable redesign study altered smartphone application permission warnings by including real-life mobile data security risks (i.e., what information from the phone will be available based on the apps’ permissions) to make users more aware of privacy concerns [28]. The researchers found that the information made users pay more attention to these issues when installing apps, showing how the design of messages can be used to better communicate important information.

Finally, the authors of this paper have previously published a study examining users’ opinions about software updates from specific and different software packages [29]. That investigation found that hesitation to apply updates and confusion with messages were common among the sampled users. Additionally, annoyance and confusion with messages proved to be predictors of hesitation to apply updates for some of the software packages tested. Finally, when comparing quantitative data gathered while users viewed the same images as in this study, that analysis found a correlation between how annoying and how confusing users rated each message, as well as a correlation between importance and noticeability. This prior publication has a different focus than this work and uses different analysis methods to make its argument (i.e., quantitative analysis). Data for both studies were collected together using the same survey, but this paper analyzes data not reported in the prior work.

While there have been some recent studies into the comprehension of security warning and software update messages, the authors have found no study that specifically looked into the effect of message design features (e.g., layout, font, button designs) on the level of confusion, annoyance, noticeability, and perceived importance experienced by users once a message is delivered, which is the main focus of our study.


A multi-formatted survey was used to gather the opinions of participants about current software update and warning messages. Participants were shown a series of images, each a screen-shot of an update or warning message. While viewing each image, they were asked to rate their perception of four qualities: importance, annoyance, confusion, and noticeability. The format of the questions asked of respondents can be seen in Table 1. Responses to these questions were given on a 7-point Likert scale. Along with the scaled questions, participants were invited to describe, in an open-answer format, what they “liked” and “didn’t like” about the message in the image.

Table 1 Qualitative questions shown with update images

The images used included messages from well-known software companies such as Microsoft and Apple, and also included messages from software companies that are less well-known or popular (e.g., VLC media player). The sample of messages used contains 13 update messages and 1 virus warning message. Although the virus warning message is not an update message, it shares many characteristics with update messages: similar size and layout, explicit mention of immediate security threats, and a request for users to take certain actions. Thus, we argue its inclusion in this study is appropriate, since the goal is to identify user opinions about software messages that attempt to motivate the user to action, and both types of message (i.e., update and virus warning) fall into this category. Images 1–6 can be seen in Fig. 1, Images 7–12 in Fig. 2, and Images 13–14 in Fig. 3.

Fig. 1

Image 1 to Image 6

Fig. 2

Image 7 to Image 12

Fig. 3

Image 13 to Image 14

Fig. 4

Distribution of responses to the question “On average, how many hours per day do you usually spend on the computer?” Responses of values 15 and above are aggregated in the category 15(+)

Sampling methodology

The flyer advertising the survey was distributed through university student and community email lists. We gathered 172 sets of responses during a two-day period; 155 of these were complete and were used in our analysis. The study was approved by the IRB, and each participant received a $15 (USD) Amazon gift card for participation. To allow participants to take breaks during the survey, they were required to create an account before beginning; this let them leave in the middle of the survey and log back in later at their convenience. Respondents also provided email addresses for the purpose of compensation. Participation in the survey was completely voluntary.

Definition of variables

Our analysis considers emotions elicited by update/warning messages, which are operationally defined in our study using self-reported ratings that reflect each variable. Emotions evoked by specific messages (i.e., annoyance, confusion, importance, and noticeability) are defined using the questions in Table 1. In this case, the emotion mentioned in the question is the emotion that question is used to measure. We also analyze the update messages included in the study on several aspects. Broadly, they are design and layout, content, and delivery mechanism. Codes were developed to describe each of these categories for every message. Those codes are described in more detail in “Results”.

Table 2 Average ratings for each image


A total of 155 adults participated in the study. All were university students and staff recruited using email lists that distributed our flyer to those populations. The sample population has a median age of 21 years; the average age is 22 years with a standard deviation of 5.4. Of the respondents, 60 % are female and 40 % are male. We asked participants to report their average hours per day on a computer. On a scale from 0 to 24 h, the average response was 6.4 with a standard deviation of 3.6, and the median response was 6 h per day. Figure 4 shows the full distribution of participants’ responses to this question.

As described in “Methods”, participants in our study were shown 14 images in series and asked to rate how important, annoying, confusing, and noticeable they thought each message was. The average ratings of each image in these four categories are listed in Table 2.
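As a concrete sketch, the per-image averages reported in Table 2 are simple arithmetic means over the 7-point Likert responses. The snippet below illustrates the computation in Python; the image names and scores are invented for illustration, since the per-participant data are not reproduced here.

```python
from statistics import mean

# Hypothetical 7-point Likert responses per image and quality;
# the study's actual per-participant data are not published here.
ratings = {
    "Image 5": {"importance": [7, 6, 6, 7], "noticeability": [7, 7, 6, 6]},
    "Image 4": {"annoyance": [6, 7, 6, 5], "confusion": [5, 6, 7, 6]},
}

# Average rating per image and quality, as tabulated in Table 2.
averages = {
    image: {quality: round(mean(scores), 2) for quality, scores in qualities.items()}
    for image, qualities in ratings.items()
}
```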

Table 3 Sample positive comments regarding various aspects of design/layout
Table 4 Frequency of comments regarding positive aspects of design/layout for all 14 sample messages

To identify the design features that may explain the image rankings, participants were also invited to express what they liked and did not like about each message in an open-answer, comment format. To help identify the underlying design features that are correlated to message noticeability, perceived importance, level of annoyance, and confusion caused by the message, we first performed a bottom-up inductive coding to identify the underlying themes across comments left by users. We discuss the results of this coding and tie them to the quantitative ratings collected for each image to help identify design features preferred and disliked by participants. To help bring context to these findings, we also relate each to results from a prior analysis of different data collected in the same survey as the data analyzed here.

Open-answer response coding procedure

Bottom-up inductive coding was used to better understand the wealth of information contained in the comments collected. Participants left a total of 1676 comments, including 809 positive comments and 866 negative comments. Throughout this report, “positive comments” and “negative comments” refers to responses to the prompt for comment regarding what participants “like” and “dislike” about each message, respectively.

The comments were first coded by a member of our lab who was unaffiliated with the design, implementation, and execution of the web survey. That process produced 52 codes describing the positive comments and 38 codes describing the negative comments. These codings were then analyzed by the lead researcher, and bottom-up inductive coding was used to summarize the initial codes into three broad themes: design/layout, message content, and update mechanism. The resulting coding was then reviewed by a third member of our group for validity.
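The per-image frequencies later reported in Tables 4, 6, 8, 10, 12, and 14 amount to counting how often each code was assigned to comments about each image. A minimal sketch of that bookkeeping, with invented code labels and assignments:

```python
from collections import Counter

# Hypothetical (image, code) pairs produced by the coding pass;
# the code labels here are invented for illustration.
assignments = [
    ("Image 5", "clear warning symbol"),
    ("Image 5", "good color contrast"),
    ("Image 3", "calm background color"),
    ("Image 5", "clear warning symbol"),
]

# Frequency of each code per image: the quantity tabulated per theme.
frequencies = Counter(assignments)
```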

Summary of themes and qualitative codings

Positive and negative comments were coded and analyzed independently, so we organize our evaluation to appropriately separate the two analyses.

Design/layout features preferred by participants

Participants left a wide range of comments regarding various positive aspects of message design/layout such as color, font size, organization of information, button designs, and window size. Sample positive comments that are related to design/layout are listed in Table 3.

Some comments highlight the importance of selecting background color carefully to reduce the level of annoyance experienced by users. One example is:

“The blue background makes it look nicer and calmer. It makes it less annoying. (Image 3)”

Other comments highlight the effect of design symbols and patterns on the perceived level of importance of a message that can help to attract a user’s attention, for example:

“Symbol and yellow/black band gives clear indication this is important. (Image 5)”

Moreover, numerous comments regarding the placement and design of buttons underscore the effect of well-designed buttons on usability, which also helps to reduce the level of confusion.

Coding analysis of the positive comments initially identified 18 codes that were later combined because they shared the common theme of design/layout. The frequency of these codes’ assignment to the positive comments left for each image can be seen in Table 4. Comparing these values with the average rating for each image in Table 2, it is interesting to see that Image 5 received the highest number of positive design/layout comments while also having the highest ratings for perceived importance and noticeability. On the other end of the spectrum, the image with the lowest number of positive comments, Image 4, received the highest ratings for annoyance and confusion.

As our previous study found, annoyance and confusion were sometimes related to hesitation in applying software updates [29]. Additionally, it found correlations between annoyance and confusion, and between importance and noticeability, based on numerical ratings of each emotion provided by participants. The above results indicate that good design, as perceived by users, can help reduce at least some negative emotions while increasing at least some positive ones, particularly those found to be further associated with hesitation to apply updates. Multiple studies, including our own, have found annoyance and confusion to be common for users when seeing updates [3, 4, 29]. Better designs could help alleviate this common issue while also increasing importance and noticeability.

Design/layout features disliked by participants

Design/layout was also a common theme in the negative comments left by participants. Comments identified unfamiliar logo/design, use of boring colors, small font size, limited range of options, and poor button designs. Sample negative comments that are related to design/layout are listed in Table 5.

Table 5 Sample negative comments regarding various aspects of design/layout
Table 6 Frequency of comments regarding negative aspects of design/layout for all 14 sample messages

Some negative comments bring attention to the importance of placing and designing buttons that consider the users’ perspectives. For example:

“I don’t like that the “Ignore” button is so close to the others. “Ignore” should be of a smaller size and require you to verify that you wish ignore the alert. It’s not clear to those technologically-illiterate what they need to do in this scenario. (Image 5) ”

Other comments highlight the negative effect of using hard-to-understand button labels, which may lead to confusion and possibly noncompliance. One such comment is:

“Too many button options. Why would I choose to “quarantine” versus “repair”? What does “reveal in finder”? (Image 5)”

Coding analysis produced 12 codes for negative comments that were combined into the theme of design/layout. Those codes, along with their frequency in the comments for all 14 messages tested, can be seen in Table 6. Consistent with the pattern found in positive design/layout comments, Image 4, the image ranked highest in annoyance and confusion, received the largest number of negative design/layout comments.

Table 7 Sample positive comments regarding various aspects of message content
Table 8 Frequency of comments regarding positive aspects of message content for all 14 sample messages
Table 9 Sample negative comments regarding various aspects of message content

As mentioned for positive comments, the tendency of negative comments about design to appear on messages that also rank high in annoyance and confusion suggests that better designs could help alleviate these concerns. Some participants mentioned the message’s logo in their comments, showing that they were negatively aware of the brand while viewing the message. This indicates that branding and update origin could affect the perception of update messages, a finding echoed by the previous investigation [29].

Message content features preferred by participants

Comments about message content proved to be another common theme in the qualitative data. Participants noted many times that they appreciated when an update message clearly mentions what software is being updated, uses simple language, explains the reason(s) behind the update, and explains its benefits clearly. Sample positive comments that are related to message content are listed in Table 7.

Table 10 Frequency of comments regarding negative aspects of message content for all 14 sample messages
Table 11 Sample positive comments regarding various aspects of update mechanism
Table 12 Frequency of comments regarding positive aspects of update mechanism for all 14 sample messages

Some participants’ answers bring attention to the importance of word choice in communicating message urgency and/or positive emotions. Two such examples are:

“Tells me it is important and needs to be done now so I do not just x out and have my computer become vulnerable (Image 2)”

“The message was very clear in what the software was going to do. I also liked how it stated “let’s get started”. That just seems a little better in the phrasing to me. (Image 3)”

Other comments speak to the importance of communicating the problem(s) being fixed by the update. For example:

“Said what it is actually fixing. (Image 4)”

From qualitative analysis, 10 codes were combined into the broader theme of message content. Those codes along with the frequency of assignment to positive comments related to each image can be seen in Table 8. As can be seen in Table 8, Image 11 received the highest number of total positive content-related comments. Interestingly, as seen in Table 2, Image 11 also ranks lowest in terms of annoyance and confusion.

Table 13 Sample negative comments regarding various aspects of update mechanism
Table 14 Frequency of comments regarding negative aspects of update mechanism for all 14 sample messages

Many prior studies have found issues with confusion related to update messages, but most focus on confusion about the update’s actions or the importance of the update rather than confusion with the message content itself [3, 4, 29]. The above results support these prior findings by highlighting that users care about understanding urgency, and they also connect this communication of urgency to the actual content of update messages, as shown in some user comments. Well-written messages can not only communicate the importance of update messages but can also possibly increase other positive emotions for viewers, which could help convince them to apply the update.

Message content features disliked by participants

Participants sometimes complained about the lack of explanation regarding the risks of not updating, failure to mention the benefits of updating, use of unfamiliar words, and messages being too technical, all complaints connected to message content. Sample comments that are related to message content are listed in Table 9.

Table 15 Correlation between number of comments of each type and quantitative ratings across the 14 images tested
Table 16 Correlations between each images’ ratings for each emotion and the frequency of the indicated positive feature codes being applied to comments about the image
Table 17 Correlations between each images’ ratings for each emotion and the frequency of the indicated negative feature codes being applied to comments about the image

Some participants feel frustration when the update message fails to properly communicate the risks and benefits of an update. For example:

“It doesn’t say anything about the update at all. It doesn’t let you know what’s actually is being done to your computer and what benefit you get from doing the update. (Image 1)”

Meanwhile, other comments highlight a lack of urgency or importance in the minds of participants related to updating. Two such comments are:

“People don’t like to change unless they have to, thus I’m not going to change something that works. And Firefox is clearly working and will continue to work if I do or do not update. (Image 12)”

“Firefox doesn’t need this red box to get users’ attention because it’s just a browser not anti-virus software. No disadvantages to use an old version. No hurts to computers. (Image 2)”

We argue that this disconnect for some participants is not only due to messages not properly communicating the benefits of staying up to date, but is also at least partly due to ignorance on the part of the participants. A lack of urgency coexists with a fear of change, which may be exacerbated by unfamiliarity with the language used in messages. One comment highlights this effect, in this case the unfamiliar word being “Firmware”:

“I dislike that it sounds like this update is going to make a bunch of changes to my device that I would not like. I also dislike that I don’t know what Firmware is and that it is not clear whether or not I would get charged for the update. (Image 6)”

This stresses the importance of using familiar and accessible language in update messages and the need to design them while keeping non-technical users in mind, which is often not the case.

Ten codes from qualitative analysis of negative comments make up the theme of message content for those comments. The codes and their corresponding frequencies in negative comments received for each image are shown in Table 10.

When looking at negative aspects of message content, confusion was a big concern, specifically with hard-to-read text that is too long or too technical for most users. These issues with confusion were also identified in our prior work, but this work shows that for many of our participants, this confusion was at least partially caused by poorly written text, a connection not made in the previous publication [29]. Avoiding these negative content features could go a long way toward better update message design.

Update mechanism features preferred by participants

Interestingly, very few participants commented positively about the update mechanism. Those that did appreciated when the update message clearly acknowledged the inconvenience of updating (e.g., machine restart, time needed to update) or when the pop-up was small and unobtrusive. Sample comments related to the update mechanism are listed in Table 11.

Some positive comments about the update mechanism show that participants like update messages that do not interrupt ongoing tasks. For example:

“In the corner, and goes away by itself, I don’t have to click on anything. (Image 14)”

However, such messages may also suffer from poor noticeability and low perceived importance. For example, Image 14 was ranked lowest in terms of perceived importance and noticeability, as can be seen in Table 2, while receiving the largest number of positive comments for being unobtrusive, as can be seen in Table 12.

We combined 9 codes to create the theme of update mechanism. The codes and their frequencies in the positive comments left for each image are available in Table 12.

Interestingly, update mechanisms are a popular research topic [3, 26, 27], but as indicated by these results, they are not much on the minds of our participants. This could be due to several factors, most importantly that participants were examining static messages and were asked to rate the messages specifically, which could have biased against responses about mechanisms. Alternatively, the lack of mention of the mechanism could indicate that users are generally happy with how updates are applied; because they do not notice it, it does not come to mind. More investigation is needed to confirm this notion.

Update mechanism features disliked by users

Fewer participants commented on negative aspects of the update mechanism than on its positive aspects. Among those that did, one of the biggest concerns was the use of scare tactics to convince users to update their systems. Participants also commonly complained about the interruption caused by updates, including needing to perform multiple steps to update and having to restart afterward.

Sample negative comments that are related to message update mechanism can be seen in Table 13. Nine codes were combined to make the update mechanism theme. Table 14 shows the codes and frequency of those codes being assigned to negative comments overall for each image.

Image based response patterns

As indicated in the summaries, it was not uncommon for images that ranked high or low on one or two of the self-reported scales to also garner the most positive or negative comments in the qualitative data. To explore this pattern, we calculated Pearson’s correlation coefficient between the total number of positive/negative comments for two themes and each image’s average importance, annoyance, confusion, and noticeability ratings. Table 15 shows the resulting calculations. Content and design were the themes focused on because the remaining theme had relatively few comments in the data.
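For reference, the Pearson correlation coefficients in Table 15 can be computed directly from the per-image totals. The sketch below uses invented comment counts and ratings, as it is meant only to illustrate the calculation, not to reproduce the study's values.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented per-image values: positive design/layout comment counts and
# average noticeability ratings (the real values appear in Tables 2 and 4).
pos_design_comments = [12, 5, 9, 3, 7]
avg_noticeability = [6.1, 4.2, 5.5, 3.8, 5.0]

r = pearson_r(pos_design_comments, avg_noticeability)
```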

As shown in Table 15, messages that received more positive comments about design and layout were more likely to have high noticeability (p = 0.55) and importance (p = 0.57) ratings and low confusion and annoyance ratings. This connection is intuitive if one considers the goal of software update and warning messages, which is to be easy to follow, eye-catching, and persuasive. If these are the goals, messages that score high on importance/noticeability (i.e., an eye-catching, persuasive message) and low on annoyance/confusion (i.e., an easy to follow message) are likely “well” designed and thus garner more positive design comments. Even ignoring the goals of message design, this result is intuitive since high ratings imply positive emotions (i.e., high importance and/or noticeability), thus making them more likely to be paired with positive comments.

With this in mind, the number of negative comments regarding design/layout appears strongly correlated with annoyance (r = 0.61) and confusion (r = 0.53), and only weakly correlated with importance (r = 0.23). We believe this reflects the same relationship as the positive comments. “Well” designed update and warning messages should be clear, pleasant, and persuasive, so it makes sense that messages rated more annoying and confusing attracted more negative design comments, since a larger number of negative design comments is intuitively indicative of poor design. Because annoyance and confusion are negative emotions, it is logical that participants would leave more negative comments in conjunction with higher ratings on these scales.

For content, messages that received higher importance (r = 0.41) and noticeability (r = 0.41) ratings also tended to receive more positive comments regarding message content. In the context of software update and warning messages, content that communicates well does so in part by conveying importance and grabbing attention, since these are goals of such messages. This relationship may explain the correlations between positive content comments (i.e., comments describing content features participants “liked”) and importance and noticeability.

Inversely, images that scored higher in terms of annoyance (r = 0.45) and especially confusion (r = 0.61) were more likely to elicit negative comments about message content. The connection with these emotions, particularly confusion, makes sense considering that a message with poorly composed content is likely to be confusing. A key goal of update and warning message content is to be accessible and easy to understand. Messages that fail at this goal are regarded as confusing, and a confusing message is intuitively more likely to be seen as annoying for many reasons (e.g., the perceived waste of time in reading it, frustration due to lack of understanding).

All of these findings are in line with our prior work, which found relationships between self-rated levels of confusion/annoyance and importance/noticeability for update messages [29]. The correlations between aspects of design and content and these emotion pairs suggest that these variables could be manipulated in messages to improve their noticeability and importance while reducing annoyance and/or confusion. How this could be done is an area for further research, but toward that goal, we present some initial directions by identifying correlations between design features and self-reported emotions.

High-impact design features

Finally, we counted the total number of positive and negative comments in different categories for each image and identified whether these counts correlate with image ratings in terms of annoyance, confusion, perceived importance, and noticeability. Intuitively, if feature ‘x’ relates to annoyance and is present in certain images, those images should receive high annoyance ratings. Table 16 lists the correlation coefficients for positive design features as coded in our analysis. As can be seen, messages that explain the importance/benefits of an update correlate with significantly lower confusion and annoyance (i.e., messages with this feature receive low ratings for being annoying and confusing). Similarly, use of color is correlated with higher noticeability.

Table 17 lists the correlation coefficients for negative design features. Messages that are too technical or contain too much content are correlated with higher ratings for confusion and annoyance. Scare tactics, commonly used to increase a message’s noticeability and importance, correlate with higher annoyance among our participants, which may hurt message effectiveness in the long run.

General affective-cognitive considerations by participants

In addition to the emotions associated with the messages being viewed, another pattern in the qualitative data highlighted the importance of emotion in the decision-making process, in both the short and long term. A total of 88 comments across the positive and negative comment sets expressed some form of general emotion not directly related to the message being viewed.

Some participants expressed a general appreciation or distaste for a particular software company or brand. Two comments that demonstrate this from both a positive and negative viewpoint are:

“I like Google (Image 1)”

“QuickTime is just annoying. (Image 7)”

Users consider their prior experiences with a brand when evaluating a new message from that brand. Thus, cultivating a good image in users’ eyes is key, because the positive emotions evoked by a user’s “like” for a brand may help a message persuade the user to take action. On the other hand, bad experiences with a brand, whether due to a faulty update, an annoying program, or even bad media attention, can hurt how persuasive new messages from that brand are. A comment that could reflect a bad past experience with updates is:

“I just know as soon as this update is installed, I’m going to be greeted with a whole bunch of problems I didn’t have before. (Image 6)”

The tone of the above comment indicates a fair amount of frustration with past updates. It is very likely this participant would weigh the inconvenience felt in the past when deciding whether to apply the update presented in the message. Frustration can also manifest in response to the messages themselves. For example:

“I am now scared. Something bad will happen if I don’t update. (Image 2)”

This comment underscores the effect update messages can have on users’ fear levels. However, as pointed out by participants, scare tactics may in fact play a negative role in convincing users in the long run. Consider an update that scares a user into applying it, only to change nothing but the program’s interface while still leaving vulnerabilities open. In such a case, the interface changes could annoy the user and make the scare tactic seem unnecessary and undeserved. When another update is needed to fix the open vulnerabilities, the same user may be less inclined to apply it, no longer “believing” the stated urgency.

How an individual feels about a message, specifically in terms of annoyance and confusion, has been found in our prior work to sometimes predict hesitation in applying an update [29]. Like our results in this paper, other studies have highlighted the effect of previous user frustrations with software updates on users’ perceptions of updates overall [3, 4]. All of these findings suggest that increasing positive associations with updates by combating negative experiences could be a good step toward increasing update application rates.


Our study aims to uncover the limitations of current software update message designs and mechanisms. Though we have presented several key findings related to user perception of software update and warning messages, we cannot extrapolate our findings to the general population. Specifically, our study, like many before it, focuses on a sample from a university population containing many students and staff. Since most of our participants are working toward or hold at least a bachelor’s degree, the findings here may not replicate for all other groups; perceptions of updates could well differ with education and/or computer proficiency. That said, our findings align well with the limitations of update messages reported in prior studies [3, 4, 24, 25, 28, 29], and our sample consisted of active computer users whose opinions about updates are valuable given their regular experience with them. The findings presented in this report can serve as a starting point for further investigation into the design and delivery of software update and warning messages for the general population by prompting similar investigations focused on other populations or message designs/types.


In this paper, we performed both quantitative and qualitative studies using existing software update messages to identify design features (e.g., layout, color, content) that may affect users positively or negatively. By analyzing quantitative data and qualitative comments from 155 users, we identified multiple design features that are highly correlated with messages being confusing, annoying, noticeable, and important. We also identified a general negative attitude toward software warning and update messages, often rooted in negative past experiences with specific software updates. Though current messages do make mistakes and provoke negative emotions and decisions, our findings show that some messages are well designed and can be used, with human behavior theory in mind, to identify the features that should be most effective at convincing users to update.


  1. Norton by Symantec. Why security updates are vital. Accessed 1 Aug 2014

  2. MIT Information Systems and Technology. Software patches and OS updates. Accessed 1 Aug 2014

  3. Vaniea KE, Rader E, Wash R (2014) Betrayed by updates: how negative experiences affect future security. In: Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, CHI ’14. ACM, New York, pp 2671–2674

  4. Wash R, Rader E, Vaniea K, Rizor M (2014) Out of the loop: how automated software updates cause unintended security consequences. In: Symposium on Usable Privacy and Security (SOUPS). USENIX Association, pp 89–104

  5. Internet social networking risks. Accessed 18 Sep 2013

  6. Jakobsson M, Myers S (2006) Phishing and countermeasures: understanding the increasing problem of electronic identity theft. Wiley, USA

  7. Sheng S, Magnien B, Kumaraguru P, Acquisti A, Cranor LF, Hong J, Nunge E (2007) Anti-phishing Phil: the design and evaluation of a game that teaches people not to fall for phish. In: Proceedings of the 3rd Symposium on Usable Privacy and Security. ACM, pp 88–99

  8. Harbach M, Fahl S, Muders T, Smith M (2012) Towards measuring warning readability. In: Proceedings of the 2012 ACM Conference on Computer and Communications Security. ACM, pp 989–991

  9. Braun CC, Glusker SA, Holt RS, Silver NC (1995) Adding consequence information to product instructions: changes in hazard perceptions. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 39. SAGE Publications, pp 346–349

  10. Edworthy J, Austin SA (1996) Warning design: a research prospective. CRC Press, Boca Raton

  11. Sojourner RJ, Wogalter MS (1998) The influence of pictorials on the comprehension and recall of pharmaceutical safety and warning information. Int J Cogn Ergon 2(1/2):93–106

  12. Wogalter MS, Brelsford JW (1994) Incidental exposure to rotating warnings on alcoholic beverage labels. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 38. SAGE Publications, pp 374–378

  13. Wogalter MS, DeJoy D, Laughery KR (1999) Warnings and risk communication. CRC Press

  14. Wogalter MS, Dietrich DA (1995) Enhancing label readability for over-the-counter pharmaceuticals by elderly consumers. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 39. SAGE Publications, pp 143–147

  15. Wogalter MS, Shaver EF (2001) Evaluation of list vs. paragraph text format on search time for warning symptoms in a product manual. Adv Occup Ergon Safety 4:434–438

  16. Young SL, Wogalter MS (1998) Relative importance of different verbal components in conveying hazard-level information in warnings. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 42. SAGE Publications, pp 1063–1067

  17. Kline PB, Braun CC, Peterson N, Silver NC (1993) The impact of color on warnings research. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 37. SAGE Publications, pp 940–944

  18. Wogalter MS, Kalsher MJ, Rashid R (1999) Effect of signal word and source attribution on judgments of warning credibility and compliance likelihood. Int J Ind Ergon 24(2):185–192

  19. Zobel GP (1998) Warning tone selection for a reverse parking aid system. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 42. SAGE Publications, pp 1242–1246

  20. Frantz JP (1993) Effect of location and presentation format on attention to and compliance with product warnings and instructions. J Safety Res 24(3):131–154

  21. Wogalter MJ, Racicot BM, Kalsher MJ, Simpson SN (1994) Personalization of warning signs: the role of perceived relevance on behavioral compliance. Int J Ind Ergon 14(3):233–242

  22. Gardner-Bonneau DJ, Kabbara F, Hwang M, Bean H, Gantt M, Hartshorn K, Howell J, Spence R (1989) Cigarette warnings: recall of content as a function of gender, message context, smoking habits and time. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 33. SAGE Publications, pp 928–930

  23. Haugtvedt CP, Petty RE (1992) Personality and persuasion: need for cognition moderates the persistence and resistance of attitude changes. J Pers Soc Psychol 63(2):308

  24. Felt AP, Ha E, Egelman S, Haney A, Chin E, Wagner D (2012) Android permissions: user attention, comprehension, and behavior. In: Proceedings of the Eighth Symposium on Usable Privacy and Security. ACM, p 3

  25. Harbach M, Fahl S, Yakovleva P, Smith M (2013) Sorry, I don’t get it: an analysis of warning message texts. In: Financial Cryptography and Data Security. Springer, pp 94–111

  26. Parthesarathy S, Fink R, Flynn SL, Sun R (2002) Software update notification. US Patent 6,353,926

  27. Reha MK, Morris CF (2001) Software update manager. US Patent 6,282,709

  28. Harbach M, Hettig M, Weber S, Smith M (2014) Using personal examples to improve risk communication for security and privacy decisions. In: Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems. ACM, pp 2647–2656

  29. Fagan M, Khan MMH, Buck R (2015) A study of users’ experiences and beliefs about software update messages. Comput Hum Behav 51:504–519


Authors’ contributions

MF and MK designed the study and prepared the necessary materials for execution. MF executed the study and collected data from participants. NN and MK manually analyzed the resulting data, developed descriptive codes, and refined those codes into the themes presented. MF and MK wrote and revised the resulting report presented in this research paper. All authors read and approved the final manuscript.


This work is supported by the National Science Foundation under Grant no. CNS-1343766 and GAANN Fellowships no. P200A100141 and P200A130153. Any opinions, findings, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.

Competing interests

The authors declare that they have no competing interests.

Author information



Corresponding author

Correspondence to Michael Fagan.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Fagan, M., Khan, M.M.H. & Nguyen, N. How does this message make you feel? A study of user perspectives on software update/warning message design. Hum. Cent. Comput. Inf. Sci. 5, 36 (2015).



  • Update message design
  • Affective-cognitive design
  • User opinions
  • User experiences