Decentralised information exchange within social networks is an important channel shaping public opinion in the digital era. This type of information source is, however, by no means bias-free.
Individuals are, in fact, not equally willing to talk about all topics, not equally willing to disclose all facts or opinions, and not equally likely to talk to everyone. A 2016 poll run by CareerBuilder showed that 42% of respondents avoid talking politics at the office, while a further 44% might be willing to talk about it but would break off the conversation if it became heated.
Our paper is a first step in investigating perceived disagreement aversion as a behavioural trait driving the informational biases arising in social networks. We propose a formalisation of perceived disagreement aversion and analyse its consequences for the incentives to share verifiable information with other agents who may hold different prior beliefs. We study a simple model of information sharing. Two individuals face each other and are aware of each other's prior beliefs, which typically differ (Is climate change human-caused? Is high taxation conducive to lower growth?). One of the individuals holds a piece of relevant and verifiable information. Given their aversion to perceived disagreement, will the informed individual always be willing to disclose the information, or will they prefer to conceal it?
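To fix ideas, the following is a minimal formal sketch; the notation and the quadratic loss are our illustrative assumptions here, not necessarily the paper's exact specification. Let \theta denote the unknown state and let each party i \in \{S, R\} (sender, receiver) hold a prior over \theta with mean \mu_i and variance \sigma_i^2. Given the sender's message m (disclose the evidence or conceal it), perceived disagreement can be measured by the distance between the parties' posterior expectations, so that a perceived-disagreement-averse sender maximises, for instance,

u_S(m) = -\big( E_S[\theta \mid m] - E_R[\theta \mid m] \big)^2,

disclosing exactly when disclosure brings the posterior means closer together than concealment would. Note that concealment is not neutral: because the information is verifiable, the receiver may also draw inferences from silence, which is what makes the disclosure problem strategic.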
Findings
A first finding is that the amount of information transmitted in equilibrium can be characterised by differences in simple statistics (mean and variance) of the prior beliefs of the parties involved. Specifically, similar variances and moderately different means are conducive to maximal information sharing. This suggests that information-sharing incentives are strongest when individuals hold different prior opinions but are similarly willing to revise them, that is, when they have a similar level of confidence in their priors.
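A worked example suggests why similar variances matter. Assume, purely for illustration (the summary above does not specify functional forms), normal priors \theta \sim N(\mu_i, \sigma_i^2) for i \in \{S, R\} and a verifiable signal x = \theta + \varepsilon with \varepsilon \sim N(0, \sigma_\varepsilon^2). Standard Bayesian updating yields the posterior means

E_i[\theta \mid x] = \frac{\sigma_\varepsilon^2 \mu_i + \sigma_i^2 x}{\sigma_i^2 + \sigma_\varepsilon^2}.

If the prior variances coincide (\sigma_S^2 = \sigma_R^2 = \sigma^2), the gap between posterior means becomes

E_S[\theta \mid x] - E_R[\theta \mid x] = \frac{\sigma_\varepsilon^2}{\sigma^2 + \sigma_\varepsilon^2} (\mu_S - \mu_R),

which shrinks the prior gap by the same factor whatever the realisation of x; under these assumptions, a disagreement-averse sender has nothing to fear from disclosure. With unequal variances the two posteriors put different weights on x, so extreme realisations can widen the gap and dampen the incentive to disclose.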
Second, we demonstrate that perceived disagreement aversion generates echo-chamber-like dynamics. If receivers of information are randomly matched with senders and priors are publicly observed, a more confident receiver is less likely to encounter contradicting information. This confirmatory bias, here the tendency to receive mainly belief-confirming information, is strengthened if senders, who dislike disagreement, can choose whom to be matched with: senders will, in this case, interact only with receivers whose prior opinion is similar to their own.
Furthermore, we evaluate the phenomenon of political correctness. In our baseline model the sender aims at being "politically correct," disclosing only those signals and pieces of information that reduce perceived disagreement. There is an ongoing debate about the value of such self-censorship. Linking to this debate, we evaluate the value of commitment to a full-disclosure strategy and find that, if the sender is the more confident party, equilibrium disclosure by a perceived-disagreement-averse sender induces higher perceived disagreement on average than commitment to full disclosure.
Finally, we extend our agenda to the study of information generation in groups. We consider a game of collective information acquisition by individuals who are all averse to perceived disagreement. Though this game is strategically different from our disclosure game, it addresses a similar problem of learning in (two-person) groups with different prior beliefs. We find that moderate heterogeneity in prior opinions (prior means) and homogeneity in prior willingness to revise opinions (prior variances) optimally incentivises information acquisition.