
Building Personalized Trust: Discovering What Makes One Trust and Act on Facebook Posts

Published: 08 October 2021

Abstract

What makes one trust or distrust a post on Facebook? What makes one willing to take an action on the post, such as sharing it with friends, following its advice, or even making a donation for its cause? We hypothesize that personal factors in addition to the characteristics of a post would be associated with one's trust and willingness to take actions on the post. To test our hypothesis, we have examined the relationships between a wide variety of personal factors and how users perceived the credibility of a post on Facebook and their associated behavior—willingness to act on the post with the acceptance of potential risks. Our results from a study involving 400 participants reveal several new findings. Notably, beyond message properties, we have discovered that personal factors are significantly associated with how users perceived the credibility of Facebook posts. Not only do our findings extend prior work on studying the relationships between information properties and users' trust in Facebook posts, but they also indicate the potential of tailoring intended messages to an audience based on their personal factors for establishing desired trust. Moreover, we have found that a combination of personal factors is associated more with users' willingness to act on a post than their perceptions of credibility alone. This finding deepens the understanding of users' behavior on Facebook beyond learning their subjective perceptions. It also informs how to potentially tailor intended messages to an audience based on the audience's personal factors so that the audience can act on the messages beyond just perceiving the credibility of the messages. We also discuss the implications of our findings for building personalized trust between message authors and readers on Facebook and beyond.

1 Introduction

Online information, including social media, has become an increasingly important source of information in people's everyday decision-making processes. For example, Yelp reviews often affect our restaurant choices, while health tips posted on Facebook may shape our diet plans. While people often use such information to influence decisions, not all such information is trustworthy, as it may contain rumors and false information [43]. Recently, a great deal of media attention has focused on the proliferation of online misinformation, for example, the coronavirus misinformation that spread on Facebook.1
To understand what makes people trust or distrust online information, researchers have conducted numerous credibility perception studies (e.g., [43, 44, 48]). These studies show that various message-related properties, such as content [15] and username [48], affect how people perceive the credibility of a message. Factors other than message properties, such as readers' demographics [24], media use habits [24], and culture [59, 61], also have been shown to affect readers' perceptions of credibility.
However, little is known about how personal factors, including personal knowledge and individual traits such as personality, are associated with one's perception of the credibility of information on social media. Understanding such relationships can benefit both readers and authors of social media messages. Specifically, it can help readers better assess the credibility of messages and make more conscientious decisions. For example, impulsive readers may be reminded to check the facts of a message before acting on it too quickly. Similarly, it can help authors enhance their credibility by tailoring their messages based on target readers' characteristics. For example, authors may be advised to present facts to analytical readers but tell a story to emotional readers to gain their trust.
In many situations, not only do authors of messages wish their audience to perceive their intended messages as credible, but they also hope the audience will act on the messages accordingly. For example, the owner of an e-commerce website not only wishes potential consumers to perceive the product information on the site as credible but also hopes the consumers will actually buy the products [39, 40]. Similarly, in an emergency situation, volunteers may post messages on social media to help an affected population stay clear of danger [5]. In such a situation, the message authors wish that their audience perceives the messages to be credible and acts on them as instructed.
While there is much work on understanding how users perceive the credibility of social media information (e.g., [59, 61]), there is little research on how such perceptions of credibility relate to acting on such information, especially if risks are involved (e.g., sharing a rumor on social media may damage one's reputation). In addition, existing research indicates that perceptions of credibility may not always result in actions, especially when risks are involved [36, 50, 56]. Given that taking action on online information may involve risks, little is known about what factors beyond people's perceptions of credibility might affect users' actions on social media information despite the potential risks involved.
To deepen the understanding of users' trust in online information, especially on social media, and to help build personalized trust between message authors and readers, we address two research questions in this article that have not been previously addressed:
1. RQ1: What factors—including personal factors that characterize individual differences, such as knowledge, experience, and personality—predict one's perception of credibility of a social media message?
2. RQ2: What factors, including personal factors beyond one's perception of credibility of information, predict one's willingness to act on a social media message despite potential risks?
To address these two questions, we have designed and conducted an action-based study involving 400 users. During this study, we solicited the users' perceptions of credibility of a set of Facebook posts (i.e., their trust rating on the posts) and their willingness to act on such posts (e.g., making a donation) with concrete measures (e.g., taking the donation from a willing participant during the study). We also explicitly asked the participants what made them trust or distrust a post and take or not take an associated action. From the participants' responses, we extracted a set of trust factors and quantified the relationships between these factors and the participants' perceptions of credibility and willingness to take an action.
Our findings from the analyses help answer the two research questions and contribute new and valuable insights for building future interactive, intelligent systems that foster personalized trust building between message authors and readers in a socio-digital context. In particular, the answers to the first question indicate what factors are associated with one's perception of credibility of social media messages and how to potentially tailor intended messages to target readers based on the readers' personal characteristics, to gain the readers' trust in the messages. The answers to the second question indicate what factors are associated with one's willingness to act and potentially how to personalize intended messages for target readers based on their characteristics, to encourage the readers to act on the messages. As a result, our work offers three unique contributions:
We have extended prior work on understanding users' perceptions of social media credibility by discovering which personal factors are related to users' perceptions of the credibility of Facebook posts, and how.
Our research has extended beyond the understanding of users' subjective credibility perceptions of Facebook posts by uncovering what factors are associated with their willingness to act on these messages, especially with the acceptance of potential risks.
Our findings highlight the relationships of personal factors with both users' credibility perceptions and risk-taking behavior, which can then be used to help build future interactive, intelligent systems that foster personalized trust between message authors and readers in a socio-digital context.
The rest of the article is organized as follows. We first provide an overview of related work before describing our study design. We then present the study results and discuss how to apply our findings to create future interactive, intelligent systems for facilitating personalized trust-building between message authors and readers. Finally, we discuss limitations and future work.

2 Related Work

Our work is related to three areas of research on human trust, including established key trust measures and factors affecting human trust in various contexts.

2.1 Measuring Trust

Trust is a complex concept and its multidisciplinary focus [50] has resulted in multiple ways to measure trust (e.g., [19, 30, 57]). Since enumerating all trust measures is out of the scope of this article, we highlight two measures most related to our study.
The most widely used trust measure is perception of credibility [19]. This measure has been used widely in numerous trust-related studies, including those in a digital context. For example, there are studies on how users perceived the credibility of computer interfaces [15, 34], blogs [13], websites [39], and product/service providers [10]. More recent studies have examined how users perceived the credibility of social media messages and authors [42, 48], including those from different cultures [59, 61]. Following these works, we also examine perception of credibility in our study but with a focus on understanding how individual differences affect perception of credibility.
Since trust is a complex construct and often influences people's behavior, estimating trust by credibility perception alone is often inadequate [36, 57]. Therefore, researchers have developed additional measures. A notable one is to measure one's trusting behavior—one's willingness to be vulnerable or willingness to act with the acceptance of risks and uncertainty [57]. Since this measure estimates one's trust beyond subjective perception, we adopt it in our study. Similar to the trust studies in e-commerce [39, 58], which examine consumer trust by examining the willingness to buy, we study users' trust in social media messages by examining their willingness to act on these messages. Since our context is different, the user actions also differ from those in e-commerce.
In our study, we thus gauge one's subjective perception of a social media message as well as the associated willingness to act on the message to deepen our understanding of people's trust in social media.

2.2 Trust Antecedents and Their Effects

Researchers have also investigated trust antecedents, factors that affect one's trust, in various situations ranging from business management [36] to e-commerce [39]. In general, trust antecedents fall into two broad categories: human-related and object-related.
Human-related trust antecedents comprise a set of human characteristics that influence people's trust in information or other humans. For example, one's demographics and political views influence one's trust in online blogs [24], while one's habits and experiences affect one's trust in computing technologies [56]. Moreover, a person's individual characteristics, such as personality traits [38, 44], cultural background [61], and social structure [16], impact interpersonal trust relationships. Human-related trust antecedents can be further divided into two sets: characteristics of trust givers and trust receivers. For example, in e-commerce, one study finds that characteristics of trust receivers (providers of online goods and services who receive consumer trust), such as their competence and motivation, impact consumer trust [40]. The same study also finds that personal factors of trust givers (consumers who entrust an e-commerce site to make purchases), such as their web experience and personal innovativeness, impact their trust [40]. Compared to these existing studies, our focus here is on studying human-related trust antecedents associated with trust givers—message readers—on social media, which has been less studied.
On the other hand, object-related trust antecedents characterize various objects involved in a trust relationship, such as computer interfaces [56], websites [15], and social media messages [6, 43, 48]. For example, one study finds that the design of a website affects the perceived credibility of the site [15]. More recent studies have found that social media message properties—such as message topics [43], user screen names [48], and hash tags [25]—affect the perceived credibility of the social media messages. Our work is closely related to these recent studies that examine perceived credibility of social media messages. However, our study extends beyond these works by also examining how various factors affect a person's willingness to act on social media messages.

2.3 Personal Traits and Trust

Personal traits, such as personality and cognitive style, influence one's perception and behavior in the real world [17]. Since trust is often estimated by people's perceptions of credibility and associated behavior [30, 57], intuitively, personal traits should influence trust. Psychological studies have indeed shown such evidence. For example, one study suggests that Agreeableness, a Big Five personality trait [11], is linked to one's propensity to trust [44]. In e-commerce, a study shows that personal Innovativeness, a trait similar to Openness in the Big 5 personality model [11] and indicating one's openness to new ideas, influences one's trust in e-commerce sites [39]. Emotion is also found to be a factor affecting trust [12]. In addition, trust has a cognitive foundation, since one's knowledge and reasoning often serve as the basis to trust or distrust (e.g., [27, 38]).
Although there is clear evidence linking trust and personal traits, such evidence has mainly been observed in face-to-face trust contexts (e.g., organizational trust [36, 38]). More recent studies have started examining such connections in online settings, such as e-commerce [18]. However, little work has been done on examining these connections in the context of social media, where user-generated content creates a less curated, more uncertain context [41]. Thus, our study of linking personal traits to trust in social media extends prior work to a new context.

3 Study Design

To answer our two research questions, we designed a study to examine people's trust in social media messages by two aspects: (1) their perceptions of message credibility and (2) their willingness to act on a message. Since trust is a personal choice or expectation [38], we are especially interested in uncovering what and how personal characteristics affect the two aspects. As a result, the findings of our study not only uncover what factors, including personal characteristics, affect people's perceptions of credibility on social media but also reveal what factors influence their willingness to use that belief to act on such messages [30]. Furthermore, we hope to use the findings to build interactive, intelligent systems that can facilitate personalized trust assessment and development between message authors and readers in a socio-digital context.
Before describing our study design, we first define a set of key terms used throughout this article to facilitate the understanding of our work.

3.1 Definition of Terms

Trust. Trust is a set of beliefs and the willingness to use such beliefs for risk-taking behavior [50, 57]. In our context, the belief is one's perception of credibility of a social media message, while risk-taking behavior is one's willingness to act on such a message with the acceptance of potential risks (e.g., financial or reputation risks). For simplicity, we consider perceived credibility as one of the trust measures, as used in a rich body of trust-related studies across many contexts [36, 38, 41, 43, 48, 56].
Personal factors. Personal factors broadly refer to various attributes associated with a person, which define such a person as a unique individual. These attributes include demographics, personality traits [11], cognitive style [33], and personal experiences or knowledge [24]. Since much research has already shown that personal factors impact one's trust in online information or interpersonal relationships, we investigate which personal factors may be related to one's perceptions of credibility of social media messages and the willingness to act on such messages. In this article, personal factors, individual differences, and personal characteristics are used interchangeably.
Trust antecedents. In our context, trust antecedents refer to a set of factors that affect one's perception of credibility of a social media message. Based on studies of one's trust in online information (e.g., [24, 59, 61]), we hypothesize that trust antecedents in our context include both personal factors as defined above and message properties identified by others (e.g., the source and form of a message [43]).

3.2 Survey Design

The goal of our study is twofold. First, we want to uncover what personal factors besides message properties are associated with one's perception of credibility of a social media message and how they do so. Second, we want to uncover which personal factors, beyond one's perception of credibility of a message, are related to one's willingness to act on a message with the acceptance of potential risks and how they do so. To achieve our study goals, we developed a survey that collected information from a participant on three aspects: (1) perceived credibility of a social media message, (2) willingness to act on the message, and (3) personal traits measured by psychometric tests. Table 1 lists the survey questions and the two psychometric tests used to measure two types of personal characteristics.
Table 1.
Q1: How much do you trust this post?
Q2: What makes you trust or distrust this post? Please explain it with a minimum of 20 words.
Q3: Given your perception of the post, do you trust the post enough to take one or more actions below? Select all that are applicable.
(a) Share this post with your friends on Facebook
(b) Leave your email address for future contact on further studies on this topic ____________________
(c) Donate $0.05 to further the research/investigation on this topic (your donation will be deducted from the bonus you earn)
Q4: What makes you take or not take the actions above? Please explain it with a minimum of 20 words.
Big 5 Personality Questionnaire (IPIP 50 items)
Basic Values Scale Questionnaire (20 items)
Table 1. Survey Questions
Before describing our survey elements, we first explain the selection of social media messages used in our study.
Selection of Social Media Messages. To situate our study in a realistic setting, we used posts on public Facebook pages for several reasons. First, increasing numbers of organizations and groups use public Facebook pages to engage with their target audience and build a user community. Figure 1 shows a public Facebook page called Daily Health Tips,2 one of the sites we used to select our posts. Also, increasingly more people routinely read and interact with posts on public Facebook pages like the ones we used in our study. Second, Facebook pages cover diverse topics, which enables us to examine the effect of different topics on people's trust in these posts. Third, Facebook is a social platform and posts on the Facebook pages naturally gather realistic social proof. This lets us study how social proof may affect people's trust in these posts.
Fig. 1.
Fig. 1. The left image (a) shows a public Facebook page called Daily Health Tips. The right image (b) shows an example post selected from the Daily Health Tips page and used in our study.
To balance the coverage and cost of the study, we selected a total of six posts on Facebook by a set of criteria. First, we selected fairly recent posts so that they were not too obsolete and could still pique our participants' interests. Second, we chose controversial posts that had already generated many polarized comments. Third, we selected posts to cover different topics, including those that are most susceptible to false information online [42]. The last two criteria were used to make a person's task in judging the credibility of a post more difficult but also more realistic. Table 2 lists all six Facebook posts used in our study. Figure 1 shows an example Facebook page and a post used in our study. It is worth pointing out that the posts that we selected in the study contain informative content and are very different from sponsored ads often appearing in a user's news feed.
Table 2.
Topic | Content | Source
Entertainment | A news article on actor Wentworth Miller's coming out as gay. | https://www.facebook.com/thehollywoodgossip
Health | A health tip on the use of mangos to prevent obesity. | https://www.facebook.com/DailyHealthTips
Health | A recipe on making a healing tonic for curing inflammation. | https://www.facebook.com/DailyHealthTips
Politics | A news article about the job situation in the U.S. | https://www.facebook.com/HuffingtonPost
Science | An article on how hugging leads to the release of a trust-inducing hormone, oxytocin. | https://www.facebook.com/itscrazyfacts
Science | An article discussing how no evidence was found for left- vs. right-brain dominance. | https://www.facebook.com/TheScienceExplorer
Table 2. Facebook Posts used in Our Study
Measuring Perceived Credibility. To solicit a participant's perception of credibility of a post and the factors that might affect the individual's perception, we asked the participant to read a presented Facebook post and answer two open-ended questions (Q1 and Q2 in Table 1). We used an open-ended question format for three reasons: (1) we wished to obtain natural responses from the participants without biasing them with any other information; (2) we hoped to discover new trust antecedents from the participants' responses; and (3) from our experience, crowd workers are more attentive to open-ended than multiple-choice questions and often offer richer information in their responses to open-ended questions than what multiple-choice questions could collect.
Measuring Willingness to Act. To model willingness to act on a Facebook post, we designed another two questions (Q3 and Q4 in Table 1). In particular, we designed three actions and asked participants to report their willingness to take these three actions:
Share. Share a post with Facebook friends.
Contact. Leave an email for future contact on further studies related to the message.
Donate. Donate for a cause related to the message (e.g., using the money to further scientific research on the topic).
We chose these three actions based on a set of criteria. First, each action bears potential risks. For example, sharing a questionable post on Facebook may risk one's personal reputation or risk being ridiculed. Similarly, leaving a personal email may subject a person to potential spam. This criterion ensures that our study measures perceived risk-taking behavior. Second, these actions are associated with different types of risks: personal burdens (e.g., email spam), reputation loss, and financial risks. This lets us examine what might affect users' actions associated with different types of risks. Third, each action is associated with a different level of risk. Specifically, the action Share bore merely a perceived risk, since a participant was not required to share a post on Facebook on the spot. In contrast, the action Contact went a step further by asking a participant to leave a personal email address, while the action Donate made the risk realistic and immediate by taking a participant's donation right away (i.e., deducting it from the money earned by the participant; see more below). Given the different levels of risk involved, we hoped that users would weigh each action carefully, which enabled us to better understand what might affect users' willingness to act when different levels of risk are involved.
Measuring Personal Traits. Based on previous trust studies (e.g., [13, 38, 44]), we hypothesized that several types of personal characteristics might affect one's trust in social media, such as personal experience and personality. Since not all of the traits can be easily measured (e.g., personal experience), in our study, we decided to measure two types of traits that can be measured by psychometric surveys reliably and have been shown to influence one's trust in various situations (e.g., [13, 38, 44]): personality traits and basic values.
Big 5 Personality. Personality defines one's unique and enduring characteristics and qualities. Previous studies found personality traits, such as Agreeableness and Openness, to impact interpersonal trust [38, 44] as well as one's trust in online information [39]. We hypothesized that such traits would also impact one's trust in social media messages. To measure personality, we adopted the most widely used Big 5 Personality model [11] and asked each participant to take the 50-item Big 5 personality survey [23]. This model measures a person's characteristics from five aspects:
Openness: How willing one is to experience new things
Conscientiousness: How persistent, disciplined, and cautious one is
Extroversion: How sociable or excitable one is
Agreeableness: How compliant and trusting one is
Neuroticism: How prone one is to emotional instability and negative emotions
We hypothesized that people who are open to new experiences (higher in Openness) and who are compliant and trusting of others (higher in Agreeableness) are more likely to trust information on social media or act on a social media message when asked to. In contrast, people who are accustomed to traditional means of receiving information and interacting with others, or people who are more skeptical, may be less willing to trust information on social media, let alone act on such information.
Basic Human Values. A person's basic values, such as respecting tradition versus valuing change, influence one's perception as well as behavior [53]. To measure values, we used the 21-item psychometric survey to gauge a participant's ten basic values [52]. Table 3 summarizes these ten dimensions that characterize one's values or desires in real life.
Table 3.
Dimension | Definition
Self-direction | Measures how much one values independence and autonomy
Stimulation | Measures how much one craves excitement and challenges in life
Hedonism | Measures how much one values pleasure or gratification for oneself
Achievement | Measures how much one values personal success through demonstrating competence
Power | Measures how much one craves social status and control over people and resources
Security | Measures how much one values stability and harmony
Conformity | Measures how much one desires restraint of actions or impulses that harm others or violate social expectations or norms
Tradition | Measures how much one respects customs and ideas aligned with one's culture or religion
Benevolence | Measures how much one values enhancing or preserving the welfare of those coming in close contact (e.g., family)
Universalism | Measures how much one values appreciation and protection of the welfare of all people or nature
Table 3. Ten Dimensions Measuring Basic Human Values [53]
We hypothesized that values would impact one's trust in social media. Specifically, we hypothesized that people who value self-direction or stimulation may be more willing to believe information appearing in unconventional channels such as social media and to take risks on such information. We also hypothesized that those who value tradition would be more skeptical of information on such unconventional channels and, thus, less likely to take actions on it, whereas a person who values change might be less skeptical of information obtained from such channels.

4 Participants and Method

We recruited our study participants through Amazon Mechanical Turk, requiring a minimum task approval rate of 98%. In order to control for cultural influence, we limited participants to only those who live in the United States. We also ensured that each participant took our survey only once. To ensure the quality of survey responses, we used additional questions to qualify a participant. The qualification questions asked a participant to describe a post, ensuring that the participant paid attention to our survey and spent adequate time to provide quality answers. Participants were paid $1.35 and a bonus of up to $0.15 after their answers were verified as complete. We requested 400 participant responses.
To reduce the length of the survey and ensure the quality of the survey on Mechanical Turk,3 we asked each participant to assess three out of the six posts we selected. Each post was provided as a URL to the original post on Facebook. This ensured that the participants read the original posts and others' comments on the post. For each of the six posts, participants were asked whether they were willing to share the post (Share) or leave a personal email for future contact (Contact). For three of the six posts,4 participants were asked whether they were willing to donate (Donate) $0.05 for future research on the topics. For the Donate action, we told the participants that $0.05 would be deducted from their earned bonus. Although $0.05 might seem small, it is not insignificant for participants, since 70% of tasks on Amazon Mechanical Turk pay $0.05 or less per task [22]. At the end of each survey, we collected a participant's demographics, including age and gender.

5 Study Results and Analyses

After manually approving each survey response, we accepted 399 qualified responses. One response was discarded due to its gibberish input. Based on our collected demographic information, there were 54% (214) females and 46% (185) males. While 77% (309) of participants were in the age group of 25 to 54, 12% (47) were 18 to 24, and 10% (40) were 55+. Figure 2 shows the demographic breakdown of household income and education. We then analyzed the 399 qualified responses to answer our two research questions. We first present an overview of the results and then our detailed analysis around the two research questions.
Fig. 2.
Fig. 2. Demographics of education and household income in our sample.

5.1 Qualitative Analysis of Survey Results

To answer our first research question, what factors predict one's perception of credibility of a post, we first analyzed participants' open-ended responses to Q1: How much do you trust this post? (Table 1) to identify their perceptions of credibility. Coding of the results revealed seven levels of perceived credibility (Table 4). The distribution of perceived credibility is slightly bimodal, with most participants perceiving the Facebook posts to be more or less credible (i.e., levels 6 and 7). Ambiguous perceptions (i.e., level 3 "not sure" or level 4 "slightly") were less common, which indicates that most participants were fairly certain of their assessments of the credibility of the posts.
Table 4.
Code | Coded Description | % | Example Responses
1 | No | 16.5% | "No, I don't believe it."; "Absolutely not!"
2 | Skeptical | 5.5% | "Skeptical and leaning negative…"
3 | Not sure | 9.2% | "I am not sure."; "I don't know if I believe it or not."
4 | Slightly | 1% | "I don't believe it is a cure but it might help some people."; "I tend to believe it a little."
5 | Partially | 8% | "There seems to be some reason behind it explained."; "It's possibly true."
6 | Probably | 13.9% | "This health tip is probably true."; "Probably, … this is not really anything eyebrow raising."
7 | Yes | 45.8% | "I definitely believe this health tip."; "I so believe this!"
Table 4. Seven Coded Levels of Perceived Credibility of Facebook Posts Based on Participants' Responses
Note: The normalized distribution of the 7-level coded perceived credibility is shown in Figure 3.
To identify the factors explaining one's perception of credibility, we then analyzed participants' open-ended responses to Q2: What made you trust or distrust a post? (Table 1). Following a qualitative content analysis approach [21], five experts, who are HCI researchers familiar with coding qualitative user responses, each analyzed 300 responses independently and came up with a coding scheme, including the level of perceived credibility and a list of factors. They then consolidated their coding schemes in several iterations. Based on the agreed coding schemes, two experts coded all of the assessments (399 × 3). The coding results achieved a strong agreement (Krippendorff's α = 0.96).
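For readers who wish to compute a comparable agreement statistic, the following is a minimal sketch (not the authors' code) of computing Krippendorff's α for two coders with the open-source Python package krippendorff; the coder labels below are made up for illustration, and the study's actual coding data are not reproduced here.

```python
# Minimal sketch, not the authors' code: inter-coder agreement on ordinal codes
# (e.g., the 7-level credibility codes) using the open-source `krippendorff` package.
import numpy as np
import krippendorff

# Rows = coders, columns = coded assessments (399 x 3 in the study); use np.nan for missing codes.
coder_a = [7, 1, 6, 3, 5, 7, 2, 4]   # illustrative labels only
coder_b = [7, 1, 6, 3, 6, 7, 2, 4]
reliability_data = np.array([coder_a, coder_b], dtype=float)

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```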
The coding revealed that eight trust factors (potential trust antecedents) affected the participants' perceptions of credibility from their own viewpoints. As shown in Table 5, these eight factors fall into two categories: personal factors and message properties. Personal factors were cited more frequently than message properties, and the most cited antecedent was Personal Knowledge, that is, 56% of the responses cited it as a factor that made a participant trust or distrust a post.
Table 5.
Category | Trust Antecedent | Example Responses | % (# of responses)
Personal Factor | Personal Knowledge | "I have read previously about oxytocin and bonding but I'd want to find out more before deciding about this statement in particular." | 56% (223)
Personal Factor | Personal Experience | "I will probably believe once I knew it worked by trying it myself." | 19% (76)
Personal Factor | Feelings | "This could be true… I have no reason to doubt it." | 19% (76)
Personal Factor | Inference | "If it's not true, someone is going to get their ass sued pretty bad. Foolish to lie about something like that." | 14% (56)
Message Properties | Message Source | "It goes to the huffington post and they seem pretty trustworthy." | 26% (104)
Message Properties | Content Quality | "It seems like a gross oversimplification." | 18% (72)
Message Properties | Social Signals | "So many comment as to the truth of it. I don't see any denials of the fact." | 3% (12)
Message Properties | Message Form | "No references were listed. The image was not professional. I do not know if the information was accurate or not." | 2% (8)
Table 5. Eight Trust Factors (Antecedents) Extracted from Participants' Open-ended Responses to Q2 (What Made Them Trust or Distrust a Post)
Note: These factors fall into two categories: personal factors and message properties. The last column lists the percentage and number of responses (out of a total of 399 responses) where each factor was mentioned.
To answer our second main research question—What factors predict one's willingness to act on a post?—we analyzed the participants' responses to Q3: Given your perception of the post, do you trust the post enough to take one or more actions below? (Table 1). Of all of the responses, 25.7% indicated unwillingness to take any action. The distribution among the rest was: Share (28%), Contact (66%), and Donate (16%). Participants were most unwilling to make a donation, which matched our hypothesis, since the risk was real and immediate (i.e., their donation was taken on the spot). In addition, the participants most likely needed the money, since their main goal of coming to Amazon Mechanical Turk was to earn money by doing micro-tasks. On the other hand, their goal of earning money might have also motivated their willingness to leave an email for future contact, which might present future earning opportunities.
To discover what drove the participants' willingness to act, we analyzed their responses to Q4: What made you take or not take the actions above? (Table 1). Following a similar approach to analyzing responses to Q2, two coders first read the responses independently and then developed a coding scheme through multiple iterations. Using the agreed-on coding scheme, they then coded all responses. The intercoder reliability tests indicated a strong reliability of the results: Krippendorff's α ranged from 0.77 to 0.85, with an average of 0.82 across all three actions. The coding identified four categories of responses, called personal motivators, which affected participants' willingness to act on a post from the participants' viewpoints (Table 6).
Table 6.
Motivator | Definition | Example Responses
Belief | One's belief related to a post or taking an action on the post | "I would not be throwing money away on something I don't believe."
Habit | One's habit of doing things related to taking an action | "I don't share such gossips on my social network."
Benefit | Potential benefits (e.g., helping self or others) generated by taking an action | "It seems…useful and interesting so I would share it."
Cost | Potential cost (e.g., financial, time, reputation) incurred by taking an action | "I don't like spamming people…with things I am not certain of."
Table 6. Four Personal Motivators Extracted from Participants' Responses that Affected their Willingness to Act

5.2 Factors Predicting Perceived Credibility

As shown in Table 5, personal factors, such as personal knowledge and experiences, were cited much more frequently than message-related factors. In this section, we explain each factor in more detail and use examples to show how they explained the participants' perceptions of credibility of a post. We then present the quantified relationships between these factors and one's perception of credibility of a post.
Personal Factors. The first factor related to one's perception of credibility is one's existing personal knowledge (56% of responses). Specifically, having prior knowledge related to a given post made the post more believable. For example, one participant mentioned: "Yes, I do believe this tip. I have heard it before…." On the other hand, a lack of knowledge on the topic of a post often led to doubts. For example, one wrote: "I am not sure. I don't know…. I would have to research it more to decide…." Similarly, inconsistencies between one's prior knowledge and the message resulted in distrust: "I have been taught differently. I don't think this post is trustworthy."
The participants also indicated that personal experience consistent with and relevant to a post's message affected their perceptions of credibility of a post. For example, one commented: “Yes, I believe this health tip. I've done some of my own….” On the other hand, a lack of relevant experience or inconsistencies between one's prior experience and the content of a post impeded trust. For example, one participant stated: “I am not sure… I would have to try it myself…,” while another wrote: “No, I don't believe this… I never experienced it….”
Two other aspects revealed in the coding that affected perceptions of credibility of a post were feelings and inference, that is, participants' preferred ways of reasoning and processing information. In particular, 19% judged the credibility of a post based on their gut feelings or intuition. For example, one wrote: "This could be true… I have no reason not to believe it," while another noted "I do believe this tip, … I honestly can see how it may work." Their own "thoughts" guided their belief: "I think this … unbelievable…" or "It sounds plausible." Opposite to the feeling-based judgment, 14% inferred the credibility of a post. For example, one judged the credibility of a post by reasoning about its potential liability: "I believe it… If it's not true, someone is going to get their ass sued…." Just like one's cognitive style—typical mode of thinking—influences trust in interpersonal relationships [38], our findings show that one's thinking style is also related to trust in social media messages. This further implies that various cognitive heuristics (e.g., checking the consistency of a message [41]) may be used to help people better assess the credibility of online information based on their thinking style (see Section 6.1).
Message Properties. In addition to personal factors, the second category of factors mentioned by the participants relates to a post itself, including Message Source, Content Quality, and associated Social Signals. Among these factors, Message Source was the factor most frequently mentioned in the responses. For example, many stated that they trusted a post from Huffington Post because they knew and trusted Huffington Post: "It was written by the huffington post. They're pretty trustworthy." On the other hand, many did not trust our entertainment post, which was from a site called The Hollywood Gossip, because they did not know the source and were suspicious of the word "gossip" in the source name. The participants did not trust informal sources either. For example, one stated "I absolutely do not believe this post because the link that is provided in the post is just a mere blog. It is not credible," while another responded "I don't believe this … because it is on Facebook, and anybody can post anything on Facebook and make it look legitimate."
Two other factors, Content Quality and Message Form, were also mentioned by the participants to affect their perceptions of credibility. In our findings, Content Quality referred to the amount of details and evidence provided to substantiate the claims in a post. When a post lacked adequate details, the participants perceived the post to be untrustworthy. For example, one participant did not think a post credible because “It seems like a gross oversimplification.” The quality of the form also impacted perception of credibility of a post. If the form of a post appeared shoddy, such as the use of low-quality images, the participants doubted its credibility: “The information is not trustworthy… the image was not professional.”
In general, we observed that the participants often used negative message properties (about 36% of the time when the participants did not believe a post), such as poor quality in content or message form, to discount their trust in a post. However, they rarely used positive message properties (e.g., a great illustration) except Message Source to endorse their trust in a post. This suggests that message properties alone are insufficient for the participants to judge message credibility.
It is also interesting to note that only 3% of responses suggested that relevant Social Signals (i.e., others' comments on a post) are associated with their perceptions of credibility of the post. One explanation might be that since the comments came from total strangers on Facebook public pages, most of the participants simply did not care much about such social signals. The situation might be different if the social signals came from people whom they knew.
Quantitative Analysis of Factors on Perceived Credibility. We first examined how the eight trust antecedents (Table 5) are associated with perceived credibility. We entered the eight trust antecedents as independent variables in a stepwise regression to predict Perceived Credibility5 with Age and Gender as control variables. We checked for multicollinearity and the VIFs showed it was not a problem. The stepwise regression model revealed that of the eight antecedents, only Content Quality and Feelings had a significant but small association with perceived credibility (Table 7, left columns). Std Est is the standardized beta coefficient. Controls of Age and Gender were not significant.
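As an illustration of this kind of analysis, the sketch below fits one linear model of perceived credibility on the two retained antecedents plus the controls and runs a VIF check with statsmodels; the file name responses.csv and the column names (perceived_credibility, feelings, content_quality, age_group, gender) are assumptions for illustration, and the snippet shows a single fitted model rather than the full stepwise procedure.

```python
# Minimal sketch under assumed column names, not the authors' code:
# regress coded perceived credibility on cited antecedents plus controls, then check VIFs.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("responses.csv")  # hypothetical: one row per participant-post assessment

# 0/1 indicators for whether a response cited each antecedent, plus Age and Gender controls.
model = smf.ols(
    "perceived_credibility ~ feelings + content_quality + C(age_group) + C(gender)",
    data=df,
).fit()
print(model.summary())

# VIF check for multicollinearity among the antecedent predictors.
X = sm.add_constant(df[["feelings", "content_quality"]])
vifs = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}
print(vifs)
```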
Table 7.
Dependent variable: Perceived Credibility

Left: Trust antecedents
Predictors | Estimate | Std Error | Std Est | t | p
Constant | 5.33 | .10 | | 52.68 | .001
Feelings | –.82 | .29 | –.14 | –2.84 | .001
Content Quality | –1.80 | .40 | –.22 | –4.47 | .001
F(2, 392) = 13.54, p < .001, adj. R² = .06

Right: Psychometric traits
Predictors | Estimate | Std Error | Std Est | t | p
Constant | 3.55 | .42 | | 8.49 | .001
Achievement | .93 | .36 | .13 | 2.57 | .011
Benevolence | 1.12 | .48 | .12 | 2.33 | .021
F(2, 392) = 6.74, p < .001, adj. R² = .03
Table 7. Results for Models Predicting Perceived Credibility Using Stepwise Regression
The left columns of Table 7 show results from the eight antecedents (4 personal factors + 4 message properties). The right columns of Table 7 show results from all 15 psychometric traits (5 Big 5 personality traits and 10 basic human values). Significant results in Table 7 (and Table 8) are reported after correction by the false discovery rate (FDR) [3].
We then used a stepwise regression model to predict Perceived Credibility with 15 psychometric traits (Big 5 personality traits and 10 basic human values), controlling for Age and Gender. The VIFs showed that multicollinearity was not a problem. The model resulted in the two human value traits of Achievement and Benevolence showing a significant relationship with Perceived Credibility, again explaining a small amount of variance (Table 7, right columns). Controls of Age and Gender were not significant.
Next, we tested a model with Perceived Credibility as a dependent variable, adding the significant psychometric traits of Achievement and Benevolence to the significant trust antecedents of Feelings and Content Quality (Table 8), controlling for Age and Gender. We used a GLM to build the model. The trust antecedents and Achievement were significant, and Benevolence showed a strong trend toward significance. As with the other models, Age and Gender were not significant. Adding Achievement and Benevolence to the model only slightly improves the adjusted R-square, and this model explains about 8% of the variance in Perceived Credibility. Since all three models in Tables 7 and 8 used the same dependent variable, we applied a False Discovery Rate correction [3] to control for the probability that one or more Type I errors could occur.
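To make the correction step concrete, here is a minimal sketch of a Benjamini-Hochberg FDR correction with statsmodels, applied to the predictor p-values reported in Tables 7 and 8; since the exact grouping of tests the correction was run over is not spelled out here, the pooled list should be read purely as an illustration.

```python
# Minimal sketch: Benjamini-Hochberg FDR correction over a set of p-values
# (here, predictor p-values reported in Tables 7 and 8, pooled for illustration).
from statsmodels.stats.multitest import multipletests

p_values = [.001, .001, .011, .021, .01, .001, .007, .07]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for p, p_adj, significant in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p:.3f}, adjusted p = {p_adj:.3f}, significant = {significant}")
```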
Table 8.
Dependent variable: Perceived Credibility
Predictors | Estimate | Std Error | t | p
Intercept | 3.70 | .47 | 7.93 | .001
Feelings | –.74 | .29 | –2.59 | .01
Content Quality | –1.68 | .40 | –4.19 | .001
Achievement | .97 | .36 | 2.71 | .007
Benevolence | .87 | .48 | 1.83 | .07
Age (18–24) | .22 | .31 | .71 | .48
Age (25–34) | .22 | .25 | .88 | .38
Age (35–54) | .34 | .25 | 1.32 | .19
Gender (Female) | .17 | .15 | 1.16 | .25
Table 8. Results for GLM Model Predicting Perceived Credibility, Including Significant Trust Antecedents and Psychometric Traits
F(10, 388) = 4.30, p < .001, adj. R² = .08. The reference category for Age is 55+ and the reference category for Gender is male.
Summary of Results for RQ1: Perceived Credibility. In summary, we found that perception of credibility is significantly related to both the personal factor of Feelings and the message factor of Content Quality. Adding the psychometric traits of Achievement and Benevolence to the personal and message factors slightly improves the fit of the model. As discussed in the next section, considering various personal factors in addition to message properties can serve as a basis to help design future interactive intelligent systems that foster personalized trust building between message authors and readers.

5.3 Factors Predicting Willingness to Act

As mentioned earlier, four factors, called personal motivators, extracted from the participants' responses explained their willingness to act on a post (see Table 6). Among these factors, Benefit was the reason most often reported by the participants for taking an action. For example, one participant who was willing to share the mango post (Figure 1(b)) wrote "These things are significantly important, I feel that I owe it to the world doing things like these," while another stated "I would probably share this post on my social media because this is what my friends might be interested in."
Seeing potential benefits is also the top reason why most participants (66%) were willing to leave an email for future contact: “I would appreciate being contacted through email. I'm doing surveys for money and every cent counts.” On the other hand, a lack of benefits was a salient reason for the participants not to take an action. For example, we were puzzled by the participants' low willingness to share a post, especially when they were not required to carry out the actual action. The participants' responses revealed the cause: they saw little benefit to sharing a post on social media. For example, one participant who was unwilling to share the mango post responded “I think that my friends already know this information.” Likewise, another participant who was unwilling to share the post (hugging releases oxytocin) commented: “My friends in particular would most likely not find this particular article useful.”
Cost, when cited, was always a reason behind people's unwillingness to act. One type of perceived cost was risking one's reputation: "I wouldn't want to share it with my social networks. I would not want my friends and followers to think I was spamming them." Another type of cost is risking one's privacy. For example, a participant who was unwilling to leave a contact email responded "I will not give out my personal information." Given the participants' goals on Mechanical Turk and the immediate effect of our Donate action, the most cited cost was financial. For example, many participants who were unwilling to donate stated "I can't donate at this time because I need all the money I can get."
Similar to Cost, Belief and Habit motivated people more to decline than to take an action. For example, one participant who was unwilling to share noted "I don't fully believe this post so I wouldn't share on my Facebook page." Another participant who was unwilling to donate stated "The post sounds like bunk to me. I don't want to donate." Likewise, Habit also motivated people not to take an action. For example, one participant noted "I would not share this information because I don't usually share health-related information with my followers." Another who was unwilling to be contacted for further information responded "I don't care about these kinds of posts. So no thanks."
In summary, our findings suggested that Benefit helps elicit people's actions on social media messages while the other three motivators (Cost, Belief, and Habit) are often used as deterrents for taking any actions. In other words, people would need to see the potential benefits and none of the deterrents before they can act on a piece of online information.
Quantitative Analysis of Personal Motivators on Willingness to Act. We investigated the relationships between the four personal motivators and the participants' willingness to act. We used the four personal motivators (Benefit, Cost, Belief, Habit) as independent variables to test their relationship with one's willingness to perform each action (Share, Contact, and Donate). Because willingness to perform each action was a categorical dependent variable, we used an ordinal regression model,6 controlling for Age and Gender. For all models, the VIFs showed that multicollinearity was not a problem.
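A minimal sketch of such a model is shown below, using the OrderedModel class in statsmodels with a logit link; the data frame and its 0/1 columns (benefit, cost, belief, habit, the age and gender dummies, and share) are assumptions for illustration, and with a binary willingness indicator this formulation is equivalent to a logistic regression.

```python
# Minimal sketch under assumed column names, not the authors' code: an ordinal
# (logit-link) regression of willingness to Share on the four personal motivators
# plus Age and Gender dummies.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("responses.csv")  # hypothetical file with 0/1 motivator codes and actions

predictors = df[["benefit", "cost", "belief", "habit",
                 "age_18_24", "age_25_34", "age_35_54", "gender_female"]]
share = df["share"].astype(int)  # 0 = unwilling, 1 = willing to share

model = OrderedModel(share, predictors, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```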
Tables 9(a–c) summarize the results. As shown, each model with the four motivators for predicting Share, Contact, and Donate was significant, explaining 20%, 12%, and 21% of the variance, respectively. Different motivators showed significance in relation to different actions. In particular, the personal motivators Benefit and Habit were significantly related to the Share action, and the Age group 35–54 was significant. Cost and Belief were significantly related to the Contact action, and both the controls of Age (25–34) and Gender were significant. Females were significantly more likely to provide contact information than males (Wald = 3.72, df = 1, p < .05). Cost, Belief, and Benefit were significant predictors of the Donate action, and the control variables of Age and Gender were not significant. It is interesting to note that females were more willing to be contacted in our study, opposite to previous findings showing that females guard their private information more tightly [31].
Quantitative Analysis of Perceived Credibility on Willingness to Act. As mentioned earlier, we hypothesized that one's perception of credibility of a message is also associated with one's willingness to act on the message with the acceptance of potential risks. We first plotted the relationship between a person's perceived credibility of a post and the person's willingness to act on the post. For comparison purposes, a person p's perceived credibility of post m is normalized between 0 and 1, and the willingness to act on post m is computed by:
\begin{equation}
\mathit{WillingnessToAct}(p, m) \approx \frac{\sum_{i=1}^{N} \mathit{Willingness}(a_i) \times \mathit{Risk}(a_i)}{\sum_{i=1}^{N} \mathit{Risk}(a_i)} \tag{1}
\end{equation}
Here, a_i is the i-th action, and Willingness(a_i) returns 0 or 1, indicating one's willingness to take action a_i. The function Risk() estimates the risk of taking an action; the higher the risk, the less likely the action would be taken. Thus, the risk can be estimated from the probability of the participants' willingness to take action a:
\begin{equation}
\mathit{Risk}(a) = \log\left(1 / P(\mathit{WillingnessToAct}(a))\right) \tag{2}
\end{equation}
By Equation (2) and our study results, the risk of each action was Share 1.27, Contact 0.42, and Donate 1.85. Contact is least risky since many participants were willing to take it, while Donate is most risky as few were willing to take this action.
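The sketch below reproduces this computation from Equations (1) and (2), using the action uptake rates reported in Section 5.1 (Share 28%, Contact 66%, Donate 16%); the resulting risk weights come out close to the values quoted above.

```python
# Minimal sketch of Equations (1) and (2): risk-weighted willingness to act.
import math

uptake = {"share": 0.28, "contact": 0.66, "donate": 0.16}  # proportions reported in Section 5.1
risk = {action: math.log(1 / p) for action, p in uptake.items()}
# risk comes out near the values quoted in the text (Share 1.27, Contact 0.42, Donate 1.85).

def willingness_to_act(willing: dict) -> float:
    """Risk-weighted willingness for one participant-post pair (Equation (1)).
    `willing` maps each action to 0 or 1, i.e., whether the participant chose it."""
    numerator = sum(willing[a] * risk[a] for a in risk)
    denominator = sum(risk[a] for a in risk)
    return numerator / denominator

# Example: a participant willing to be contacted and to donate, but not to share.
print(willingness_to_act({"share": 0, "contact": 1, "donate": 1}))
```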
Based on one's normalized perceived credibility and willingness to act computed by Equation (1), we computed two distributions over all person-message pairs. Figure 3 plots the two distributions, which indicate the overall relationship between perception of credibility of a post and willingness to act on the post. While both distributions were bimodal, they were quite different: the distribution of willingness to act was positively skewed, whereas the distribution of perceived credibility was negatively skewed. In other words, many people inflated their expressed perceived credibility of a post, since perceptions were purely subjective and carried no consequences. When actions subjected people to potential risks, however, their responses revealed a more realistic level of willingness to act. This comparison again shows that perceived credibility of a post alone is inadequate to explain one's willingness to act on the post in the real world.
Fig. 3.
Fig. 3. Comparing perceived credibility with willingness to act. The X axis is the normalized value measuring one's perception of credibility or willingness to act; and the Y axis plots the normalized density (number of participant-message pairs).
Given the above understanding, we then quantified the relationship between perceived credibility and one's willingness to act. We added perceived credibility (7 levels, Table 4) as an independent variable to the four personal motivators to see whether we could improve the fit of the models for each type of action, controlling for Age and Gender. Because the dependent measure of each action was a categorical variable, we used ordinal regression. The results are summarized in Table 10.
For Share, using a Likelihood Ratio test, the addition of Perceived Credibility did not significantly improve the fit of the model compared with a model with personal motivators alone. The controls of Age and Gender were not significant. For Contact, the control variable of Age (25–34) was significant. A Likelihood Ratio test showed that the addition of perceived credibility did not significantly improve the fit of the model compared with a model with personal motivators alone, controlling for Age and Gender. For Donate, Cost and Perceived Credibility were significant and neither Age nor Gender was significant. A Likelihood Ratio test also showed that the addition of Perceived Credibility did not significantly improve the fit of the model compared with a model with personal motivators alone.
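For reference, here is a minimal sketch of how such a likelihood-ratio comparison can be computed from two nested models' –2 log-likelihood values with scipy; the numbers passed in are placeholders rather than the study's fitted values.

```python
# Minimal sketch: likelihood-ratio test between a reduced model (personal motivators alone)
# and a full model that adds Perceived Credibility, given each model's -2 log-likelihood.
from scipy.stats import chi2

def likelihood_ratio_test(neg2ll_reduced: float, neg2ll_full: float, df_diff: int) -> float:
    """Return the p-value for the improvement of the full model over the reduced model."""
    lr_stat = neg2ll_reduced - neg2ll_full  # equals -2 * (LL_reduced - LL_full)
    return chi2.sf(lr_stat, df_diff)

# Placeholder values: reduced -2LL = 800.0, full -2LL = 795.5, one added predictor.
print(likelihood_ratio_test(800.0, 795.5, df_diff=1))
```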
Quantitative Analysis of Additional Factors on Willingness to Act. As discussed earlier, factors that significantly impacted perception of credibility were Content Quality and personal Feelings,7 as well as two psychometric traits of Achievement and Benevolence (Table 8). To examine the relationships between such factors and willingness to act, we added these factors along with the personal motivators and perceived credibility that showed significant relationships with an action as shown in Table 10. We used an ordinal regression model to predict each type of action, controlling for Age and Gender. Multicollinearity was checked for all models and was not a problem. Tables 11(a–c) summarize the results.
A likelihood ratio test was used to compare whether the addition of these independent variables in Table 11 improved the fit of the models shown in Table 10. For all three actions (Share, Contact, and Donate), the models in Table 11 did not fit as well as the best-fitting models, which are shown in Table 9.
(a) Predicting Share: The reference category for Age is 55+ and the reference category for Gender is male. Chi-square (10) = 80.38, p < .0001, Nagelkerke pseudo R2 = .20, –2 Log Likelihood = 792.24.

                   Estimate   Std. Error   Wald    df   Sig.
Benefit            2.02       .60          11.30   1    .01
Cost               .81        .95          .72     1    .40
Belief             .09        .86          .01     1    .92
Habit              –2.02      .74          7.47    1    .006
Age (18–24)        .49        .43          1.31    1    .25
Age (25–34)        .17        .36          .22     1    .64
Age (35–54)        .80        .36          5.00    1    .03
Gender (Female)    .35        .20          3.07    1    .08

(b) Predicting Contact: The reference category for Age is 55+ and the reference category for Gender is male. Chi-square (10) = 46.13, p < .0001, Nagelkerke pseudo R2 = .12, –2 Log Likelihood = 825.32.

                   Estimate   Std. Error   Wald    df   Sig.
Benefit            .65        .60          1.22    1    .27
Cost               –3.02      .94          10.33   1    .001
Belief             –2.32      .88          6.96    1    .008
Habit              –.60       .68          .77     1    .38
Age (18–24)        .63        .42          2.18    1    .14
Age (25–34)        .80        .34          5.58    1    .02
Age (35–54)        .45        .34          1.81    1    .18
Gender (Female)    .39        .20          3.72    1    .05

(c) Predicting Donate: The reference category for Age is 55+ and the reference category for Gender is male. Chi-square (10) = 37.85, p < .0001, Nagelkerke pseudo R2 = .21, –2 Log Likelihood = 291.20.

                   Estimate   Std. Error   Wald    df   Sig.
Benefit            1.82       .94          3.74    1    .05
Cost               –3.96      1.79         4.91    1    .03
Belief             –3.01      1.54         3.83    1    .05
Habit              –2.25      1.35         2.76    1    .10
Age (18–24)        –.91       .73          1.53    1    .22
Age (25–34)        –.45       .53          .72     1    .40
Age (35–54)        –.47       .55          .74     1    .39
Gender (Female)    –.07       .35          .04     1    .84

Table 9. Using 4 Personal Motivators (Benefit, Cost, Belief, and Habit) to Predict Willingness to Act
Note: Significant factors are highlighted in yellow.
Thus, Benefit, Habit, and Perceived Credibility were significantly associated with Share (model in Table 10), while Cost and Belief were significantly associated with Contact (model in Table 10). The action Donate was significantly associated with Cost and Perceived Credibility in the model in Table 10.
(a) Predicting Share: The reference category for Age is 55+ and the reference category for Gender is male. Chi-square (11) = 136.55, p < .0001, Nagelkerke pseudo R2 = .32, –2 Log Likelihood = 815.33.

                        Estimate   Std. Error   Wald    df   Sig.
Benefit                 1.50       .63          5.65    1    .02
Cost                    1.81       1.01         3.24    1    .07
Belief                  1.11       .92          1.47    1    .23
Habit                   –2.95      .79          14.10   1    .001
Perceived Credibility   .60        .08          51.44   1    .001
Age (18–24)             .34        .44          .61     1    .44
Age (25–34)             –.03       .37          .006    1    .94
Age (35–54)             .70        .37          3.54    1    .06
Gender (Female)         .19        .21          .86     1    .35

(b) Predicting Contact: The reference category for Age is 55+ and the reference category for Gender is male. Chi-square (11) = 48.84, p < .0001, Nagelkerke pseudo R2 = .13, –2 Log Likelihood = 886.95.

                        Estimate   Std. Error   Wald    df   Sig.
Benefit                 .51        .60          .73     1    .39
Cost                    –2.82      .95          8.91    1    .003
Belief                  –1.97      .85          5.39    1    .02
Habit                   –.71       .68          1.08    1    .30
Perceived Credibility   .12        .07          2.93    1    .09
Age (18–24)             .59        .42          1.95    1    .16
Age (25–34)             .77        .34          5.12    1    .02
Age (35–54)             .42        .34          1.53    1    .22
Gender (Female)         .36        .20          3.20    1    .07

(c) Predicting Donate: The reference category for Age is 55+ and the reference category for Gender is male. Chi-square (11) = 42.60, p < .0001, Nagelkerke pseudo R2 = .24, –2 Log Likelihood = 293.39.

                        Estimate   Std. Error   Wald    df   Sig.
Benefit                 1.71       .97          3.11    1    .08
Cost                    –3.73      1.81         4.25    1    .04
Belief                  –2.48      1.55         2.55    1    .11
Habit                   –2.02      1.35         2.23    1    .14
Perceived Credibility   .30        .14          4.74    1    .03
Age (18–24)             –1.14      .75          2.32    1    .13
Age (25–34)             –.61       .54          1.28    1    .26
Age (35–54)             –.62       .56          1.23    1    .27
Gender (Female)         –.17       .36          .23     1    .64

Table 10. Predicting Willingness to Act by Adding Perceived Credibility to the 4 Personal Motivators (Benefit, Cost, Belief, and Habit)
Note: Significant factors are highlighted in yellow and marginally significant factors in gray.
Summary of Results for RQ2: Willingness to Act on a Message Post. Overall, our results suggest that the four personal motivators (Table 6) were significantly associated with willingness to act on a message post. In particular, Benefit was positively associated with Share, whereas Habit was negatively associated with Share. Belief and Cost were inversely associated with taking an action to Contact, and Cost was inversely associated with taking an action to Donate. Like the Belief motivator, perceived credibility was associated with a person's willingness to Share and Donate. The trait of Benevolence showed a trend of being related to one's willingness to Donate. This result seems sensible, since people high on Benevolence are concerned with the welfare of others and are more willing to contribute to helping them (Table 3).
As further discussed in Section 6, our findings could help inform developers of intelligent systems to use a combination of factors, including personal motivators and psychometric traits, to predict a user's willingness to take actions in a socio-digital context, and to potentially persuade or dissuade users from acting on a piece of information.

6 Discussion

As mentioned in the introduction, our ultimate goal is to build interactive, intelligent systems that can help establish personalized trust between message authors and readers based on the personal characteristics of the readers. Here, we use message authors to broadly refer to both machines and humans who intend to communicate with a target audience (message readers) for achieving certain purposes. For example, in an emergency situation, a robot or a rescue worker may be a message author who generates evacuation instructions to help the affected human population (message readers) to act accordingly. Likewise, a human coach or a virtual agent may be a message author who generates exercise guidance to coach target users (message readers) to achieve desired behavior, such as overcoming anxiety and adhering to medical treatment.
We interpret our findings and discuss how they can inform design in the context of what is feasible today and what technological gaps remain for future applications.

6.1 Personality, Trust, and Social Media

A number of research studies have shown that personal factors, including personality traits, affect interpersonal trust (e.g., [38, 44, 51, 54]). Additionally, recent studies show that the content and form of social media posts affect people's trust in such posts [43, 48]. Based on such findings, we hypothesized that personal factors, such as personality traits, would also affect people's trust in social media posts, in addition to object-related factors such as the content and form of the posts. To validate our hypothesis, we designed and conducted the presented study.
While our study is perhaps only a first step toward discovering how personal factors relate to people's trust in social media, it revealed several interesting findings that either differ from previous results or had not been reported before.
First, personal factors, such as personality trait scores, alone do not predict a person's perceptions of trust in a Facebook post, nor can they predict the person's trusting behavior. Second, certain personal factors together with certain attributes of a post predict one's perceptions of trust in a Facebook post. In particular, together, one's personality trait score for Feelings and one's perceived content quality of a post predict one's perception of trust in that post. Our finding that Content Quality is significant is similar to what was found in prior studies [24, 43]. One explanation for why Feelings was significant is that Feelings measures one's state of consciousness, such as that induced by emotions and sentiment, just like Perceived Credibility represents one's subjective interpretations of a post.
Additionally, two personal factors, Achievement and Benevolence, could serve as moderators and significantly improved the fit of the prediction models. As shown in Table 3, Achievement values personal success through demonstrated competence, while Benevolence measures how much one values enhancing or preserving the welfare of those in close contact (e.g., family). Our finding thus suggests that achievement-driven people who also care about others' welfare tend to have more confidence in social media posts, which may also imply that such people need to exercise extra caution when digesting social media content.
Third, in addition to measuring perception of trust in a social media post, our study also measured trust behavior—willingness to take actions based on that belief despite the risks involved. This is because one's perception of trust is subjective and inadequate for measuring a trust construct [36]. Toward this goal, we measured willingness to act involving three types of risks: (1) sharing a post with the risk of sacrificing one's reputation (Share), (2) giving out one's personal email for future contact on the topic of the post with the risk of being spammed (Contact), and (3) donating an earned bonus to further the research on the post topic at the risk of a financial loss (Donate).
It is worth noting that our study is the first to reveal the quantitative relationship between one's perception of trust and willingness to take actions (Figure 3), which indicates that perception of trust is not always aligned with willingness to act on that perception. Moreover, our findings indicate that personal motivators, such as Benefit and Cost, significantly outweigh one's perception of trust in predicting one's willingness to act (Tables 9–11). These findings differ from those of previous studies, in which perceived credibility was a stronger predictor of consumer actions at e-commerce sites than Benefit or Cost [26]. Our results imply that, in a less curated context such as social media, people might be motivated by factors such as Benefit and Cost beyond the perceived credibility of messages. Our results also imply that personal characteristics, such as Benevolence, might affect one's willingness to act regardless of one's perceived credibility in the context of social media.
(a) Predicting Share: The reference category for Age is 55+ and the reference category for Gender is male. Chi-square (15) = 144.78, p < .0001, Nagelkerke pseudo R2 = .34, –2 Log Likelihood = 814.84.

                        Estimate   Std. Error   Wald    df   Sig.
Perceived Credibility   .56        .09          43.85   1    .001
Benefit                 1.39       .64          4.76    1    .03
Habit                   –2.93      .80          13.56   1    .001
Age (18–24)             .21        .44          .23     1    .63
Age (25–34)             –.11       .37          .09     1    .77
Age (35–54)             .63        .37          2.89    1    .09
Gender (Female)         .16        .21          .55     1    .46

(b) Predicting Contact: The reference category for Age is 55+ and the reference category for Gender is male. Chi-square (15) = 58.18, p < .0001, Nagelkerke pseudo R2 = .15, –2 Log Likelihood = 882.58.

                        Estimate   Std. Error   Wald    df   Sig.
Cost                    –3.12      .96          10.65   1    .001
Belief                  –2.17      .86          6.33    1    .01
Age (18–24)             .45        .43          1.12    1    .29
Age (25–34)             .70        .34          4.23    1    .04
Age (35–54)             .38        .34          1.27    1    .26
Gender (Female)         .30        .21          2.06    1    .15

(c) Predicting Donate: The reference category for Age is 55+ and the reference category for Gender is male. Chi-square (15) = 47.06, p < .0001, Nagelkerke pseudo R2 = .26, –2 Log Likelihood = 288.93.

                        Estimate   Std. Error   Wald    df   Sig.
Cost                    –4.32      1.87         5.34    1    .02
Age (18–24)             –1.35      .77          3.05    1    .08
Age (25–34)             –.65       .54          1.43    1    .23
Age (35–54)             –.65       .57          1.33    1    .25
Gender (Female)         –.24       .36          .44     1    .51

Table 11. Predicting Willingness to Act by Adding Significant Factors That Predicted Perceived Credibility (Content Quality, Feelings, Achievement, and Benevolence) to the 4 Personal Motivators (Benefit, Cost, Belief, and Habit)
Note: Only significant factors are included, highlighted in yellow and marginal factors in gray.
In general, our findings imply that personality factors (e.g., Feelings, Achievement, and Benevolence) are associated with perceptions of trust in social media posts (in our case, Facebook posts), while personal motivators, such as Benefit and Cost, are associated with one's willingness to take an action on a post despite the risks involved. In other words, such personal factors could be used to encourage or discourage one's perceptions of, as well as behavior on, social media. On the other hand, since personal factors alone could not predict one's perceptions of trust or willingness to take actions, object-related factors, such as perceived content quality, could also be used to persuade or dissuade one's trust in social media.
Nevertheless, accurately measuring personal factors, such as one's personality traits, at scale (e.g., for all social media readers) is not an easy task. Traditional personality models and their item-based tests have their own set of challenges [14, 37]. Recent research shows the promise of automatically deriving personal factors directly from communication text (e.g., [63]) and provides some initial evidence on the effective use of such inferred personality traits [60]. It would be interesting to investigate how these new methods could be used to derive personal factors and how such derived factors would be associated with one's trust in social media.
In the following sections, we discuss how our findings could be used to help both authors and readers of social media establish trustworthy relationships.

6.2 Guide Message Authors to Build Trust with Readers

Previous studies suggest that message authors should pay attention to message properties, such as topic, content, and username, to build better credibility and trust with their audience [15, 43, 48]. Beyond message properties, our study found that a set of personal factors, such as personal knowledge and personality traits, is associated with people's trust in online information. We believe that such factors could be used to guide message authors to build better trust with their intended readers.
To build personalized trust, an intelligent system first must automatically infer a reader's personal characteristics, such as personal knowledge and personality traits. Although inferring one's knowledge and experience is still very challenging, recent research has shown early promise. For example, Luo et al. have estimated a person's knowledge about a topic in an enterprise [29], while Pan et al. [49] have inferred one's expertise in a particular domain using social Q&A information. Furthermore, recent work has shown the feasibility of automatically inferring personality traits [1, 18, 63] and basic values [9] from a user's communication text. Additionally, research shows that such automatically inferred traits can beat human assessments [62] or predict real-world user behavior better than humans or self-reported traits can [60, 32, 18]. However, automatically inferring one's diverse characteristics, especially context-relevant characteristics (e.g., knowledge and experience related to a particular health care or political context), is still very challenging, since much domain-specific, fine-grained knowledge must be acquired to make such inferences.
On the other hand, research shows that messages tailored to readers' characteristics have more persuasive power [8]. Assume that an intelligent system could accurately learn a user's characteristics as mentioned above. Such a system could then potentially guide message authors to tailor their messages to the intended readers based on the readers' inferred characteristics, building better trust with the readers. For example, knowing a reader's knowledge and experience, the system may specifically highlight how an intended message is relevant to that knowledge and experience. Likewise, knowing a reader's personality traits or basic values could allow the system to help message authors tailor their messages to those values or traits.
For example, our findings indicate that the personal motivator Benefit is associated with willingness to take actions. If the system could infer that a reader values others' welfare (i.e., scoring high on Universalism in Table 3), it could communicate to the reader how acting on a message (e.g., following an evacuation order or overcoming an adverse behavior) may greatly benefit others and society as a whole. On the other hand, our findings suggest that Cost could be a factor that deters people from taking actions. In such cases, the system could potentially warn a user about the costs or risks involved in taking an action based on the user's unique characteristics. For example, a message that explains potential costs using logic and evidence helps build trust with readers who rely on logic and inference to assess the credibility of information, while a message featuring an authentic story to outline potential risks might win over audiences who base their trust on feelings and intuition. As mentioned above, the main technology gap still lies in a system's ability to accurately infer a reader's characteristics and to automatically adapt a message to that reader in context. Such abilities require a system to possess a rich amount of knowledge to decide what to communicate and how.
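To make this tailoring idea concrete, the sketch below selects a message framing from inferred trait scores. It is purely illustrative: the trait names, thresholds, and wording are assumptions for the sake of example, not outputs or recommendations of this study.

```python
# Purely illustrative: choose how to frame the cost of inaction based on inferred traits.
def frame_message(traits: dict, action: str) -> str:
    """traits maps hypothetical trait names to scores in [0, 1]."""
    if traits.get("analytical", 0.0) >= 0.6:
        # Evidence-first framing for readers who rely on logic and inference.
        return (f"The evidence suggests that people who do not {action} face a "
                "measurably higher risk. Here is the data and how it was collected.")
    if traits.get("universalism", 0.0) >= 0.6:
        # Benefit-to-others framing for readers who value others' welfare.
        return (f"By choosing to {action}, you help protect the people around "
                "you, not just yourself.")
    # Default: a brief story-based framing for feeling-oriented readers.
    return (f"A neighbor who chose not to {action} shared what it cost them; "
            "their story is below.")

print(frame_message({"analytical": 0.8}, "follow the evacuation order"))
```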

6.3 Guide Message Readers to Make Trust Judgment

Making better trust judgments is important and sometimes critical. For example, people's trust judgments about emergency evacuation orders might be a matter of life and death. However, studies show that people are poor at judging the credibility of information by content alone [43, 55]. Given the relationship of personal factors with trust, an interactive, intelligent system may help users better assess the credibility of information based on their own characteristics. Our study provides a basis for others to build on in exploring how a user's personal characteristics can be incorporated into intelligent systems that could guide the user to make a better assessment. For example, if a user is feeling-based and action-oriented, the system might remind the user to be more objective by checking relevant facts and comparing Benefit and Cost before acting.
Another area for exploration is how a system, based on a person's characteristics, can also teach a reader to use proper cognitive heuristics to judge the credibility of information [41]. For example, the system may suggest that a naturally skeptical reader use a persuasive-intent heuristic to prevent biases in assessing the credibility of a piece of information. In contrast, it may recommend that an intuition-based reader use an endorsement or consistency heuristic to validate the information by checking others' comments or the consistency of the information across different sources.
Our findings also differentiated trust elicitors (e.g., the Benefit motivator) from trust deterrents (e.g., the Habit motivator). Future work can explore how to build intelligent systems that could potentially help users make a balanced judgment by incorporating both trust elicitors and deterrents. For example, a system could highlight trust deterrents for users who are agreeable and trusting in nature, while emphasizing trust elicitors to those who are naturally skeptical. As mentioned above, recent advances in automatically inferring one's personal characteristics are beginning to make such scenarios feasible.
Our study also suggests that the collected responses could serve as a valuable training corpus for inferring additional user characteristics. For example, phrases such as “I feel” and “I think” appeared frequently in responses that indicated perceptions of credibility based on feelings. Future research could explore how such language patterns could be used to infer a user's thinking style. With such a capability, a system could automatically infer a user's personal characteristics, predict that user's trust in a given message, and then guide the user to make more effective trust judgments.
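As a small illustration of this direction, the sketch below counts a few surface language cues that might separate feeling-based from reasoning-based responses. The cue lists are assumptions for illustration, not a validated model of thinking style.

```python
# Illustrative only: count simple feeling vs. thinking cues in an open-ended response.
import re

FEELING_CUES = [r"\bi feel\b", r"\bi felt\b", r"\bseems\b"]
THINKING_CUES = [r"\bi think\b", r"\bbecause\b", r"\bevidence\b", r"\bsource\b"]

def cue_counts(text: str) -> dict:
    text = text.lower()
    return {
        "feeling": sum(len(re.findall(p, text)) for p in FEELING_CUES),
        "thinking": sum(len(re.findall(p, text)) for p in THINKING_CUES),
    }

response = "I feel this post is true because it cites a source I trust."
print(cue_counts(response))  # {'feeling': 1, 'thinking': 2}
```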
Again, building intelligent systems capable of performing the functions described here remains a big challenge, since such systems require a deep understanding of their users and the relevant context (e.g., helping readers assess the credibility of a health care–related post might be very different from doing so for a politics-related post). As machines gain richer knowledge and more powerful inference capabilities, the systems we have envisioned will become more feasible and better able to help us make more informed decisions.

6.4 Limitations

Here, we discuss the limitations of our current study and future work. Our study is a first step toward developing an understanding of how personal characteristics affect one's trust in online information. It has several limitations that we describe next.
Participants. Since our study focused on people's trust in Facebook posts and 68% of American adults use Facebook,8 we recruited our participants on Amazon Mechanical Turk, hoping that they would represent a diverse population of Facebook users. However, studies show that Mechanical Turk participants are not fully representative of American Internet users or of social media users in general [2, 4, 7, 28]. Thus, it is unknown how well our participants represented Facebook users. Moreover, our study did not collect information about the participants' Facebook usage behavior, for example, whether someone was a very active user who posts frequently or a lurker who reads posts occasionally. Such usage behavior might bias one's trust in Facebook posts. In other words, our results might be limited by the opinions of the current participants and may vary for a different population of Facebook users.
Scope of Findings. Our study used message content from Facebook. While our findings can potentially generalize to many social media platforms, further research is needed to determine which platforms they extend to. For example, not many of our participants were willing to share a post on Facebook; their behavior might differ if they were asked to share a message on another platform, such as retweeting on Twitter. The two situations are quite different: sharing a post within one's friend network (Facebook) versus sharing it publicly with mostly strangers (Twitter). Moreover, our participants mostly relied on the text in a post to make decisions. What if a post contains little text, as on Instagram? What would affect trust in such posts? Although our findings were consistent across the diverse topics tested, it is unclear whether drastically different topics, such as emergency warnings or polarized political opinions, would affect people's trust differently than the topics in our study did. Nevertheless, given the popularity of Facebook, which is the most widely used social media platform among Millennials [45], we feel that investigating trust in Facebook messages is a good first step.
Also, we did not control for a participant's level of interest in the post or the number of reactions that the post received, which could have biased the perceived credibility of the post.
Moreover, our study focused on understanding people's initial trust in social media and did not address temporal aspects, even though trust often develops and changes over time [24]. For example, what would change a person's initial perception of the credibility of a post? How would changes in one's personal traits, such as knowledge and experience, affect the perception of credibility or willingness to act over time? Answering these questions would require longitudinal studies that observe people's trusting behavior over time.
In sum, our study is a first step toward understanding how personal factors are associated with trust and further study is needed for wider generalizability. Moreover, we note that there are gaps between our findings in this study and their potential applications in the real world as we have envisioned.
Threats to Validity of Models. This article reports exploratory analyses of factors associated with people's perceptions of the credibility of Facebook posts and their willingness to take actions on such posts. The study has several limitations due to its design, which might affect the validity of the models and results.
First, our study used open-ended questions to elicit participants' perceptions of trust in Facebook posts and then manually analyzed all open-ended answers to extract the perceived credibility of each post as well as the rationales behind it. While the open-ended questions elicited rich information from the participants, it would have been ideal to also collect quantitative trust ratings, such as with a quantitative trust scale that measures trust along several dimensions [36]. Such a scale would have provided more comprehensive information on the perceived credibility of a Facebook post, which could be used to corroborate participants' rationales as well as facilitate analyses.
Second, since there is little prior research on which personal factors affect people's trust in social media, especially Facebook posts, we did not have a priori hypotheses about which variables might play a role. We thus performed exploratory analyses, testing a number of variables, including personal factors, message properties, and psychometric scores. Because we tested multiple hypotheses with Perceived Credibility, we needed to control for Type I errors, which we did using a False Discovery Rate correction.
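For concreteness, the sketch below shows how a Benjamini-Hochberg false discovery rate correction [3] can be applied to a set of p-values; the values shown are made up purely for illustration.

```python
# Illustrative FDR (Benjamini-Hochberg) correction over hypothetical raw p-values.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.030, 0.210, 0.480]  # hypothetical raw p-values
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for raw, adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.3f}  adjusted p = {adj:.3f}  significant: {sig}")
```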
Third, our analyses used regression models to understand the mathematical relationships between a number of independent and dependent variables. However, such models cannot establish causality. For example, although Feelings is a personal factor shown to be significantly associated with perceived credibility, any number of underlying covariates could be involved. Therefore, we cannot claim causality for any of the variables—personal factors, message properties, personal motivators, or psychometric measures—without conducting controlled experiments and additional analyses.
Coarse Measurement of Personal Characteristics. Our findings indicated that innate traits, such as personality and basic values, did not have as much impact on one's trust as we had hypothesized. One reason might be that our study used rather coarse psychometric tests to measure traits. For example, we used the IPIP-50 to measure the Big 5 personality traits. It is possible that a more rigorous psychometric test, such as the IPIP-300 or the NEO PI-R, which derives the Big 5 plus 30 personality facets, might make a difference. To address this limitation, we would need to use a finer-grained psychometric assessment.
Moreover, the personal factors extracted from the participants' responses, such as personal knowledge and experience, were modeled as binary values (presence vs. absence). Such modeling is limited, since it is not a rigorous or objective measure of characteristics such as the actual level of domain knowledge or prior experience related to a post used in our study. Thus, we do not know the fine-grained relationships between a user's level of personal knowledge or experience and perception of credibility or willingness to act. Finer-grained and objective modeling of these factors may also help us better understand how they might bias user trust in online information.
In addition, a participant might have mentioned only one or two factors in their responses. Since it is difficult to extract all the factors that might be associated with one's trust in a social media post, in this study we obtained only the factors that explicitly appeared in the participants' responses. In other words, our study considered only the top factors mentioned by the participants, and there might be additional factors affecting their perceptions that they did not disclose.
Assessing Risks in Willingness to Act. Our action-based trust study was the first to examine both user perceptions of credibility in social media and willingness to act on such information. To create a realistic context, we had to design a set of actions bearing realistic, measurable risks. Except for Donate, whose risk was easy to quantify, it was hard to assess the risks associated with Share and Contact. For example, we did not know how much risk there really was when a participant left an email address for future contact: was the provided address a frequently used personal account or a general account? Assessing the risk of Share is also hard, as additional factors may affect the level of risk (e.g., sharing a post with a few close friends vs. a large group). Further, we could not follow up to verify that a participant had actually shared a post; therefore, we can only claim that participants stated their intention to share, not that they actually did so. Albeit difficult, the ability to better assess the risks of various actions would make our findings better grounded, and in future work we plan to instrument better risk-taking actions for more effective action-based trust studies. Also, because participants were performing tasks to earn money, this setting could have influenced their willingness to donate money; in future work, we will examine other populations.

7 Conclusions

We have presented a behavior-based trust study involving 400 participants. It examined their trust in social media through (1) their perceptions of the credibility (believability) of a Facebook post, and (2) their trusting behavior associated with the post. Our study has revealed several new and valuable findings. First, certain personal factors (e.g., personal knowledge and basic values) are associated with one's perception of credibility in social media beyond the message properties discovered previously. Incorporating personal factors such as Feelings produces a better-fitting model of perception of credibility than using the message factor of Content Quality alone. Second, although perception of credibility is significantly related to trusting behavior, it has less predictive power than personal motivators (e.g., Benefit). This implies that people on social media need motivation to act on messages beyond merely perceiving them as credible. Third, a combination of factors, including personal experience and personal motivators, has even greater power in explaining willingness to act. Given the relationships between personal factors and trust in social media, we have also discussed their potential applications to building interactive, intelligent systems that can personalize trust development between message authors and readers.

Acknowledgments

The original study was conducted while most of the authors were with IBM.

Footnotes

3
On Mechanical Turk, participants are often multi-tasking and performing many micro-tasks at the same time. The longer a task gets, the less attentive a participant becomes to the task.
4
Not every post was suitable for us to ask the participants whether they would donate for further research. We chose three that made sense (i.e., the two Science posts and the Mango health post).
5
For each participant, we averaged their perceived credibility responses (1–7) over the three posts that they assessed. Std Est is the standardized beta coefficient. The left columns of Table 7 show results from eight antecedents (4 personal factors + 4 message properties); the right columns show results from all 15 psychometric traits (5 Big 5 personality traits and 10 basic human values). Significant results in Tables 7 and 8 are highlighted after correction by the false discovery rate (FDR) [3].
6
Each type of action is a four-level (0–3) value that summed a participant's willingness to perform the action over each of the three posts assessed by the participant.
7
Here, “Feelings” measures a participant's emotionality, that is, how easily a person becomes emotional or is affected by emotions.

References

[1]
S. Adali and J. Golbeck. 2012. Predicting personality with social behavior. In Proceedings of the 2012 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM'12). IEEE Computer Society.
[2]
T. S. Behrend, D. J. Sharek, A. W. Meade, and E. N. Wiebe. 2011. The viability of crowdsourcing for survey research. Behavior Research Methods 43, 3 (2011), 800–813.
[3]
Y. Benjamini and Y. Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological) 57, 1 (1995), 289–300.
[4]
A. J. Berinsky, G. A. Huber, and G. S. Lenz. 2012. Evaluating online labor markets for experimental research: Amazon.com's Mechanical Turk. Political Analysis 20, 3 (2012), 351–368.
[5]
A. Bruns and Y. E. Liang. 2012. Tools and methods for capturing Twitter data during natural disasters. First Monday 17, 4.
[6]
K. R. Canini, B. Suh, and P. L. Pirolli. 2011. Finding credible information sources in social networks based on content and social structure. In SocialCom'11, 1–8.
[7]
K. Casler, L. Bickel, and E. Hackett. 2013. Separate but equal? A comparison of participants and data gathered via Amazon's MTurk, social media, and face-to-face behavioral testing. Computers in Human Behavior 29, 6 (2013), 2156–2160.
[8]
J. Chen, E. Haber, R. Kang, G. Hsieh, and J. Mahmud. 2015. Making use of derived personality: The case of social media ad targeting. In 9th International AAAI Conference on Web and Social Media.
[9]
J. Chen, G. Hsieh, J. Mahmud, and J. Nichols. 2014. Understanding individuals' personal values from social media word use. In CSCW'14. 405–414.
[10]
K. Cook, C. Cheshire, A. Gerbasi, and B. Aven. 2009. Assessing trustworthiness in providers of online goods and services. In Etrust: Forming Relationships in the Online World. K. Cook, C. Snijders, V. Buskins, and C. Cheshire (Eds.). Russell Sage Foundation, New York. 189–214.
[11]
J. M. Digman. 1990. Personality structure: Emergence of the five-factor model. Annual Review of Psychology 41, 1 (1990), 417–440.
[12]
J. R. Dunn and M. E. Schweitzer. 2005. Feeling and believing: the influence of emotion on trust. Journal of Personality and Social Psychology 88, 5 (2005), 736.
[13]
P. Dwyer. 2007. Building trust with corporate blogs. In ICWSM'07.
[14]
J. Fan, D. Gao, A. Carroll, S. Lopez, J. Siva, T. Tian, and H. Meng. 2012. Testing the efficacy of a new procedure for reducing faking on personality tests within selection contexts. Journal of Applied Psychology 97 (2012), 866–880. https://doi.org/10.1037/a0026655.
[15]
B. J. Fogg, J. Marshall, O. Laraki, A. Osipovich, C. Varma, N. Fang, J. Paul, A. Rangnekar, J. Shon, P. Swani, and M. Treinen. 2001. What makes web sites credible? A report on a large quantitative study. In Proceedings of CHI'01, 61–68.
[16]
J. Golbeck (Ed.). 2008. Computing with Social Trust. Springer Science and Business Media.
[17]
K. M. Goldstein and S. Blackman. 1978. Cognitive styles: Five approaches and relevant research. Wiley & Sons.
[18]
L. Gou, M. X. Zhou, and H. Yang. 2014. KnowMe and ShareMe: Understanding automatically discovered personality traits from social media and user sharing preferences. In Proceedings of CHI'14: 955–964.
[19]
R. Hardin. 2004. Trust and Trustworthiness. Russell Sage Foundation, New York.
[20]
S. Holm. 1979. A simple sequential rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70.
[21]
H. Hsieh and S. Shannon. 2005. Three approaches to qualitative content analysis. Qualitative Health Research 15, 9 (2005), 1277–1288.
[22]
P. Ipeirotis. 2010. Analyzing the Amazon Mechanical Turk marketplace. XRDS: Crossroads, The ACM Magazine for Students 17, 2 (2010), 16–21.
[23]
O. P. John, E. M. Donahue, and R. L. Kentle. 1991. The Big Five Inventory v4a and 54. Berkeley: UC Berkeley, Institute of Personality and Social Research.
[24]
T. J. Johnson and B. K. Kaye. 2011. Hot diggity blog: A cluster analysis examining motivations and other factors for why people judge different types of blogs as credible. Mass Communication and Society 14, 2 (2011), 236–263.
[25]
B. Kang, J. O'Donovan, and T. Höllerer. 2012. Modeling topic specific credibility on Twitter. In Proceedings of IUI'12, 179–188.
[26]
D. J. Kim, D. L. Ferrin, and H. R. Rao. 2008. A trust-based consumer decision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents. Decision Support Systems 44, 2 (2008), 544–564.
[27]
J. D. Lewis and A. Weigert. 1985. Trust as a social reality. Social Forces 63, 4 (1985), 967–985.
[28]
K. E. Levay, J. Freese, and J. N. Druckman. 2016. The demographic and political composition of Mechanical Turk samples. SAGE Open 6, 1 (2016), 2158244016636433
[29]
L. Luo, F. Wang, M. X. Zhou, Y. Pan, and H. Chen. 2014. Who have got answers?: Growing the pool of answerers in a smart enterprise social QA system. IUI 2014, 7–16.
[30]
N. Luhmann. 1979. Trust and Power: Two Works. Wiley.
[31]
M. Madden. 2012. Privacy management on social media sites. Pew Internet Report, 1–20.
[32]
J. Mahmud, M. X. Zhou, N. Megiddo, J. Nichols, and C. Drews. 2013. Recommending targeted strangers from whom to solicit information on social media. In Proceedings of the 2013 International Conference on Intelligent User Interfaces. ACM, 37–48.
[33]
L. Martin. 1989. The cognitive-style inventory. In The 1989 Annual: Developing Human Resources. University Associates, 123–139.
[34]
S. Marsh and M. R. Dibben. 2003. The role of trust in information science and technology. Annual Review of Information Science and Technology 37, (2003) 465–498.
[35]
W. Mason and S. Suri. 2012. Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods 44, 1 (2012), 1–23.
[36]
R. C. Mayer, J. H. Davis, and F. D. Schoorman. 1995. An integrative model of organizational trust. Academy of Management Review 20, 3 (1995), 709–734.
[37]
Dan P. McAdams. 1992. The five-factor model in personality: A critical appraisal. Journal of Personality 60, 2 (1992), 329–361.
[38]
D. J. McAllister. 1995. Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal 38, 1 (1995), 24–59.
[39]
P. McCole and A. Palmer. 2002. Transaction frequency and trust in Internet buying behavior. Irish Mar. Rev. 15, 2.
[40]
D. H. McKnight, V. Choudhury, and C. Kacmar. 2002. Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research 13, 3 (2002), 334–359.
[41]
M. Metzger and A. Flanagin. 2013. Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics 59, 210–220.
[42]
T. Mitra and E. Gilbert. 2015. CREDBANK: A large-scale social media corpus with associated credibility annotations. In ICWSM'15.
[43]
M. R. Morris, S. Counts, A. Roseway, A. Hoff, and J. Schwarz. 2012. Tweeting is believing?: Understanding microblog credibility perceptions. In Proceedings of CSCW'12, 441–450.
[44]
T. Mooradian, B. Renzl, and K. Matzler. 2006. Who trusts? Personality, trust and knowledge sharing. Management Learning 37, 4 (2006), 523–540.
[45]
Nielsen Company. 2014. Millennials: Breaking the myths. 2014. Retrieved July 15, 2021 at http://www.nielsen.com/us/en/reports/2014/millennials-breaking-the-myths.html.
[47]
S. Tavernise. 2016. “As fake news spreads lies, more readers shrug at the truth.” New York Times, 2016. http://www.nytimes.com/2016/12/06/us/fake-news-partisan-republican-democrat.html.
[48]
A. Pal and S. Counts. 2011. What's in @name? How name value biases judgment of microblog authors. In ICWSM'11.
[49]
S. Pan, E. Mayfield, J. Lu, and J. Lai. 2015. Signals of expertise in public and enterprise social Q&A. In ICWSM'15.
[50]
S. Y. Rieh and D. R. Danielson. 2007. Credibility: A multidisciplinary framework. Annual Review of Information Science and Technology 41, (2007) 307–364.
[51]
J. B. Rotter. 1980. Interpersonal trust, trustworthiness, and gullibility. American Psychologist 35, 1 (1980), 1.
[52]
S. H. Schwartz. 2007. Value orientations: Measurement, antecedents and consequences across nations. In Measuring Attitudes Cross-nationally: Lessons from the European Social Survey. 169–203.
[53]
S. H. Schwartz. 2012. An overview of the Schwartz theory of basic values. Online Readings in Psychology and Culture 2, 1 (2012), 11.
[54]
B. R. Schlenker, B. Helm, and J. T. Tedeschi. 1973. The effects of personality and situational variables on behavioral trust. Journal of Personality and Social Psychology 25, 3 (1973), 419.
[55]
Stanford History Education Group. 2016. Evaluating information: The cornerstone of civic online reasoning. Retrieved July 15, 2021 at https://sheg.stanford.edu/upload/V3LessonPlans/Executive%20Summary%2011.21.16.pdf.
[56]
H. Tseng and B. J. Fogg. 1999. Credibility and computing technology. Communications of the ACM 42, 5 (1999), 39–44.
[57]
M. Tschannen-Moran and W. K. Hoy. 2000. A multidisciplinary analysis of the nature, meaning, and measurement of trust. Review of Educational Research 70, 4 (2000), 547–593.
[58]
C.-C. Wang, C.-A. Chen, and J.-C. Jiang. 2009. The impact of knowledge and trust on e-consumers' online shopping activities: An empirical study. Journal of Consumer Psychology 4, 11–18.
[59]
Y. Wang and G. Mark. 2013. Trust in online news: comparing social media and official media use by Chinese citizens. In Proceedings of CSCW'13, 599–610.
[60]
Z. Xiao, M. X. Zhou, and W. T. Fu. 2019. Who should be my teammates: Using a conversational agent to understand individuals and help teaming. In Proceedings of the 24th International Conference on Intelligent User Interfaces. 437–447.
[61]
J. Yang, S. Counts, M. R. Morris, and A. Hoff. 2013. Microblog credibility perceptions: comparing the USA and China. In Proceedings of CSCW'13. 575–586.
[62]
W. Youyou, M. Kosinski, and D. Stillwell. 2015. Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences 112, 4 (2015), 1036–1040.
[63]
M. X. Zhou, G. Mark, J. Li, and H. Yang. 2017. Trusting virtual agents: Effect of personality. In the Special Issue of IUI 2017 Highlights, ACM TiiS 9, 2–3.
