The value of distrust

https://doi.org/10.1016/j.jesp.2008.05.003

Abstract

We assume that a state of distrust is the mental system’s signal that the environment is not normal—things may not be as they appear. Hence, individuals sense they should be on guard. In particular, they are likely to avoid routine strategies, ones proven to be optimal and regularly used in normal environments, because these strategies are easily anticipated by whoever may be seeking to deceive them. Conversely, a state of trust is associated with a feeling of safety. The environment is as it normally is and things really are as they appear to be. Thus, individuals see no reason to refrain from doing what they routinely do. Accordingly, we hypothesize that figuring out a new situation depends on the type of environment and the actor’s state of mind: in normal environments, where routine strategies are optimal, individuals who trust should outperform those who distrust; however, in unusual environments, where non-routine strategies are optimal, individuals who distrust should outperform those who trust. This paper reports three experiments that manipulate distrust via orienting tasks that participants perform prior to attempting to predict a series of events (Experiments 1 and 2) or solve matchstick arithmetic problems (Experiment 3). Performance success depends on discovering and implementing an appropriate rule. We found that, as predicted, the manipulation of distrust sensitized participants to the existence of non-routine contingencies, that is, contingencies that were not expected.

Introduction

Distrust is a ubiquitous psychological state that arises whenever we seem unable to take appearances, typically others’ declarations and behaviors, at face value. This usually occurs because we recognize that their interests conflict with ours. Somewhat less common are instances in which we are unaware of any explicit conflict of interest but nonetheless feel uneasy, sensing that the situation is not normal, that others may behave unpredictably or cause something unexpected to happen.

The social psychological significance of distrust is best understood in comparison with its opposite, a state of trust. When protagonists trust each other, they may deliberately give each other the benefit of the doubt and assume that their interests are shared, or at least that each has the other’s interests in mind. Ordinarily, when the stakes are not high, trust is the default state, so that without thinking much about the other, individuals feel the environment is normal and there is no need to worry. Trust, therefore, entails a belief that the other’s actions will benefit the protagonist or that the situation is benign and nothing detrimental to one’s interests will occur (Robinson, 1996). On the other hand, distrust denotes a perception of vulnerability due either to fear of the other’s motives, intentions, and prospective actions, or to vague forebodings that things are not as they appear and something unpredictable may occur (Koslow, 2000; Kramer, 1999).

Typically, a state of distrust is focused—it is attached to a specific target, often a person (but also, perhaps through analogy, an organization such as a political party or an inanimate object such as a car). Focused distrust can be triggered either because one knows something about the target (e.g., motivations, intentions, or past behaviors; see review in Kramer, 1999) or because one draws inferences about the other from the nature of the situation (e.g., the presence of temptations to defect, the lack of inducements to reciprocate, or the absence of a binding commitment; e.g., King-Casas et al., 2005; Seabright, 2004; Yamagishi et al., 1998). In such cases not only does the protagonist doubt the target’s messages and actions, but the target becomes less liked (Winston, Strange, O’Doherty, & Dolan, 2002). This can lead to discounting of the target’s recommendations or judgments, as well as to a tendency to avoid the target and to be on guard when interacting with him or her.

A state of distrust can also be unfocused in the sense that it is not attached to a particular source. It may reflect residual activation of previous episodes of focused distrust, or the impact of cues that are typically associated with deception (e.g., the amount of details in the message, vocal tension, or fidgeting; see DePaulo et al., 2003 for a meta-analytic review). In such cases, perceivers may not be aware of why they distrust or, even more likely, they may not even consciously experience this state. Still, we propose that states of trust and distrust, regardless of whether they are focused or unfocused, conscious or unconscious, are associated with different patterns of thoughts and actions.

In order to understand the habitual thought and action processes under trust and distrust we must examine their meaning in everyday functioning. Briefly, trust connotes safety and transparency; individuals believe there is nothing to be feared in transactions between them and others. Distrust, in contrast, is associated with the concealment of truth and a lack of transparency. It is a state of uncertainty, but not the kind associated with outcomes that are inherently probabilistic, as in playing a slot machine or roulette. Rather, distrust reflects the receiver’s perception of the source’s intention (to mislead) and, potentially, the receiver’s theory about the truth (Schul, Mayo, Burnstein, & Yahalom, 2007). This characterization is important because people are not only the targets of misinformation attempts, they are also the source of such attempts. As a result, unlike the possibility of losing to a slot machine, the threat of losing to a person engages recursive reasoning: individuals who suspect they may be deceived assume their potential deceiver will attempt to mask the deception by using knowledge acquired as a successful deceiver, as well as a target of deception, in the past. That is, the deceiver may put on a display of routine actions, the ones each party expects of the other in normal (i.e., truthful) environments, where the problem is typically one of coordinating actions to achieve a shared goal. Therefore, unlike people who trust, those who distrust attempt to detect the other’s deception by searching for signs that the other’s behavior departs from what is routine in the situation.

What are the implications of this analysis for the thought processes triggered under diffused trust and distrust? We propose that, other things being equal, when a state of trust is active, one tends to believe, to follow the immediate implications of the given information. In contrast, when a state of distrust is active, one tends to search for non-obvious alternative interpretations of the given information, because distrust is associated with concealment of truth (cf. Fein, 1996; Schul et al., 1996). Thus, in distrust, the mental system becomes more open to the possibility that the ordinary schema typically used to interpret the ongoing situation may need to be adjusted. We propose that this pattern of thought may occur even in cases of diffuse distrust, where the situation that triggered the distrust bears no phenotypic resemblance to the current situation.

We (Schul, Mayo, & Burnstein, 2004) investigated this conjecture by comparing contexts of trust versus contexts of distrust in respect to the associative links they activated in processing messages. Our reasoning was as follows: when a source is trusted, receivers have no doubt the message is true, and automatically encode it as such. This in turn causes them to spontaneously bring to mind ‘routine’ concepts which are typically congruent with the message. When a source is distrusted, however, the message is doubted and, as a result, ‘non-routine’ concepts, typically incongruent with the message, spontaneously come to the receivers’ minds. This conjecture was tested using single words as messages and priming facilitation to indicate the associative structure activated in response to a prime word. We found, as predicted, that when a prime word appeared together with a stimulus that signaled distrust, it facilitated associations that were incongruent with the meaning of the target word (e.g., dark activated light). However, when a prime word appeared in the context of trust, the prime activated associations that were congruent with it (e.g., dark activated night).

The research reported in this paper attempts to extend and generalize these findings by investigating more general and abstract phenomena. Specifically, since the Schul et al. (2004) study involved reactions to verbal information, it could be interpreted in line with either a narrow or a broad perspective. According to the narrow perspective, the facilitation of incongruent cognitions under distrust is interpreted as the tendency of individuals to resist persuasive intent. According to the broader perspective, trust and distrust are associated with different types of thought processes: under trust one turns to routine mental outcomes whereas under distrust one turns to the non-routine. According to this broader interpretation, states of trust (versus distrust) influence how people consider information and generate inferences even when linguistic processes in general, and persuasion processes in particular, are irrelevant.

By definition, ‘routine’ strategies are those that have proven most useful in an individual’s normal or typical environment and are thus most likely to be activated by default. Consider the inferences one makes about a car whose color is a shiny red. Typically, when there is no reason to distrust, perceivers make correspondent inferences, from the way the car looks to other characteristics (e.g., how well it runs). Such a routine inference process makes sense under trust, where things tend to be the way they appear. However, consider the reactions of a buyer who distrusts sellers. It is quite possible that the inference might be the opposite, namely, that good looks might be covering something up, implying a negative relationship between how things look and how they are.

More generally, our analysis suggests that when people attempt to impose a structure on their complex world, they develop many schemata or inference rules that are used to predict one attribute from others. These rules vary in their level of routine, that is, the likelihood that they will be used in a normal environment. We hypothesize that those who trust tend to access inference rules that are more routine and prevalent than those who distrust. As a result, those who trust succeed more in making inferences in environments that are typical, but those who distrust do better in environments that are unusual, unexpected, or non-routine (see below).

To explore this hypothesis we employed experimental paradigms that make minimal use of linguistic inferences. Instead, we compare how people figure out the environment using abstract rules that vary in their level of routine. Specifically, Experiments 1 and 2 utilized the multiple-cue probability-learning paradigm, which is based on Brunswik’s lens model and Hammond’s Social Judgment Theory (see Doherty & Kurz, 1996, for a review). In this paradigm participants are given several cues (e.g., scores on different tests) on each of many trials, and asked to use them in predicting an outcome (e.g., success on a job). Once a prediction is made, participants are informed about the actual outcome, and a new trial begins. Initially, of course, participants do not know how to use the cues in making their predictions. However, over trials, based on the feedback they get, participants can, and do, discover the rule linking cues to outcome. Since the cues can be combined in different ways to determine the criterion, participants’ performance success depends on having the appropriate rule in mind.

To illustrate the nature of the rules used in Experiments 1 and 2, consider the following example. Imagine having to predict an applicant’s on-the-job success (y) from two predictors: the applicant’s level of motivation (x1), and his or her level of education (x2). Consider the following two inference rules: according to one, success is linked positively to both motivation and education. We call this the positive-linear rule. According to the other rule, on-the-job success is linked positively to motivation, but negatively to education. For this reason we call it the negative-linear rule.
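To make the paradigm and the two rules concrete, here is a minimal simulation sketch of one prediction block. The cue ranges, scoring, and rule forms are illustrative assumptions, not the authors’ materials: on each trial the participant sees two cue values, predicts the outcome, and receives feedback. A participant whose hypothesized rule matches the environment’s rule predicts accurately; one who applies the other rule does not.

```python
import random

# Illustrative sketch of a multiple-cue prediction block; cue ranges and
# the error measure are assumptions, not the authors' materials.

def positive_linear(x1, x2):
    """Routine rule: the outcome rises with both predictors."""
    return x1 + x2

def negative_linear(x1, x2):
    """Non-routine rule: the outcome rises with x1 but falls with x2."""
    return x1 - x2

def run_block(environment_rule, participant_rule, n_trials=20, seed=0):
    """Mean absolute prediction error over one block of trials."""
    rng = random.Random(seed)
    total_error = 0
    for _ in range(n_trials):
        x1, x2 = rng.randint(1, 9), rng.randint(1, 9)  # cues shown this trial
        prediction = participant_rule(x1, x2)           # participant's guess
        outcome = environment_rule(x1, x2)              # feedback given
        total_error += abs(prediction - outcome)
    return total_error / n_trials

# A hypothesis matching the environment yields zero error; a mismatch does not.
print(run_block(positive_linear, positive_linear))  # 0.0
print(run_block(positive_linear, negative_linear))  # > 0
```

In the actual experiments the environment was constructed according to one rule or the other, so which hypothesis a participant entertains first determines how quickly the feedback becomes informative.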

Which of these rules is more routine? Brehmer (1974, 1980) offers a well-established method for answering this question with respect to rule discovery and rule implementation. He compared people’s intuitions about the prevalence of positive-linear, negative-linear, and non-linear rules, as well as the ease of learning these rules. He reported that (i) participants estimated that positive-linear rules are more prevalent than negative-linear rules, which in turn were viewed as more prevalent than non-linear rules; and (ii) the ease of learning these rules followed this pattern as well. Consequently, we assume that positive-linear rules tend to be more routine, meaning that they are more likely to be accessed when people are trying to draw inferences from one attribute to another. Note that the above results should not be understood as suggesting that positive-linear rules are highly routine in an absolute sense, meaning that people consider them frequently in all situations. Rather, Brehmer’s findings should be interpreted in a relative sense. Given that people think about linear rules, they are likely to consider hypotheses about positive-linear contingencies more readily than those about negative-linear contingencies. Note that we do not assume that rules are explicit or verbalizable. Research on implicit learning (e.g., Eitam, Hassin, & Schul, 2008) shows that people can pick up regularities and act using rules based on these regularities, without being aware of the existence of the rules.

Of course, there are other factors that influence which inference rules are activated and used in any given context. Activation is influenced by factors such as chronic or recent activation and folk theories about applicability (e.g., Sedikides & Skowronski, 1991). However, other things being equal, our formulation predicts that under conditions of trust individuals will behave routinely and use the strategy they normally use. In Brehmer’s model, this means they will attempt to utilize the more routine rule. In contrast, when distrust is activated, people depart from the routine and try other (non-dominant) rules to fit the data.

Because understanding environments and making correct predictions are aided by having the right inference rules, our analysis predicts that trusting individuals should succeed more in those environments characterized by dominant rules, while those who distrust should succeed more in understanding unusual environments, which are characterized by non-dominant rules. It is important to note that while our task involves repeated trials (multiple predictions), which allow us to explore different stages of performance (predisposition, learning, and discovery), in real life many tasks involve one-shot attempts. The predisposition for routine or non-routine strategies may be even more critical in determining people’s actions in the single-trial case.

Section snippets

Experiment 1

Participants were asked to make predictions in one of two environments, which were constructed according to a positive-linear rule (the routine) or a negative-linear rule (the non-routine). Based on our theoretical analysis we hypothesize that under trust people tend to activate routine inference rules, while under distrust they activate non-routine rules. Therefore, it follows that distrust is an advantage when the environment is constructed according to the less routine rule (the negative-linear rule).

Experiment 2

Experiment 2 replicated the design of Experiment 1, using a different pair of inference rules that allow prediction from a pair of predictors to an outcome. According to the max rule, outcomes are affected by the maximum of the two predictors, whereas according to the min rule, they are affected by the minimum of the two (see Method section below). Assuming that x1 and x2 are positively-scaled mental constructs (e.g., motivation and ability), discovering and learning the max rule should be

Experiment 3

Trust and distrust were manipulated in this experiment by having participants process one of two faces, either a face which conveyed trust or one which conveyed distrust. Participants were asked to form an impression of the person whose face they saw, and to remember this impression. We reasoned that participants who processed the trustworthy face would be more likely to be in a trusting mode of thinking, while those who processed the untrustworthy face would be more likely to be in a distrusting mode of thinking.

General discussion

Distrust is typically viewed as a mental state caused by the threat of being deceived. Research shows that people who are in such a state behave differently toward others than do people who trust (Kramer, 1999). Since (i) everyone must deal with potentially untrustworthy individuals many times over their lifetime, (ii) these encounters are often socially or otherwise significant, and (iii) this has probably been the case throughout the history of our species as well as that of other

Acknowledgments

Preparation of this paper was supported by grants from the US-Israel Binational Science Foundation (BSF), the Israel Academy of Science (ISF), and from K Mart Center for Retailing and International Marketing. We wish to thank Shirly Ronen and Maytal Bar-Hai for their help in running the experiments and coding the data, and Gideon Keren and Naomi Yahalom for commenting on an earlier version of this paper.

References (40)

  • Dean, R.B., et al. (1971). Forewarning effects in persuasion: Field and classroom experiments. Journal of Personality and Social Psychology.
  • DePaulo, B.M., et al. (1997). The accuracy-confidence correlation in the detection of deception. Personality and Social Psychology Review.
  • DePaulo, B.M., et al. (2003). Cues to deception. Psychological Bulletin.
  • Doherty, M.E., et al. (1996). Social judgment theory. Thinking and Reasoning.
  • Eitam, B., et al. (2008). Non-conscious goal pursuit in novel environments: The case of implicit learning. Psychological Science.
  • Fazio, R.H., et al. (2004). Attitude formation through exploration: Valence asymmetries. Journal of Personality and Social Psychology.
  • Fein, S. (1996). Effects of suspicion on attributional thinking and the correspondence bias. Journal of Personality and Social Psychology.
  • Gilbert, D.T. (1991). How mental systems think. American Psychologist.
  • Jasperson, A.E., et al. (2002). An aggregate examination of the backlash effect in political advertising: The case of the 1996 U.S. Senate race in Minnesota. Journal of Advertising.
  • King-Casas, B., et al. (2005). Getting to know you: Reputation and trust in a two-person economic exchange. Science.