
We need to talk more about anthropomorphism

We can expect to spend more and more time with various AI-based technologies – at home, at work, and in commercial environments such as restaurants, hotels, and airports. And we can be sure that the interfaces of these technologies will have many similarities to us humans. Already, for example, service robots have arms and legs that resemble ours, and chatbots often have a face, an alleged gender, and a human-sounding name. They receive our messages (and respond) in human language. It is because of the consequences of such human-likeness that we need to talk more about anthropomorphism. Here, I want to highlight two possible consequences: we may lose some of our ability to be critical of new technologies, but we may also become too critical and fail to appreciate what is actually good for us.
 

Anthropomorphism is our tendency to react to a non-human object in ways that resemble – or are identical to – how we react to real people. Anthropomorphism can perhaps be seen as a primitive reaction of the same kind as believing in elves and trolls, but it is now well-documented in many studies [1]. Even enlightened modern humans engage in anthropomorphism. In most cases, it is not the result of careful analysis but rather an almost automatic reaction. We know that a chatbot is not a human, especially if we discover that it struggles to answer simple questions. Nevertheless, we still react in ways that resemble how we react when we encounter a real human.

Anthropomorphism can manifest itself in several ways. A common way is to attribute to a non-human agent qualities and abilities that we use when describing ourselves or other people – warmth, competence, intelligence, and morality are examples. Warmth and competence are considered two important dimensions in our evaluations of other people [2], and they form the basis for many stereotypical reactions (such as thinking that wealthy individuals are unfriendly but competent). It has also been shown that we can attribute to non-humans – especially human-like robots – advanced human qualities such as agency and the ability to experience emotions [3]. Participants in studies have even attributed to non-humans what is called theory of mind, the ability to understand what is in the minds of other people [4]. This is a truly human competence, in the sense that many human activities would be impossible if we did not understand how other people think. One might wonder whether a non-human can really have a sensible understanding of what is in human minds – but note that ChatGPT has already passed tests that are commonly used to determine whether humans have theory of mind.

 

Anthropomorphism can also occur when we react behaviorally to a non-human. Politeness is one such behavior. An early study shows that we tend to be polite to computers, at least if there is something in our interaction with a computer that reminds us of an interaction with people. In this study, researchers attempted to mimic an interactive private lesson with a human teacher; they designed a computer-based lesson so that the computer taught facts in a way that was tailored to each participant's level of knowledge. And when it was time to evaluate the "lesson," it emerged that participants said kinder things about the teaching when feedback was given directly to the computer than when it was given to someone else – just as we tend to soften our feedback about a human teacher when that teacher receives it in person [5].
 

One explanation for anthropomorphism is that we have well-developed concepts of what it means to be human, but very limited knowledge of what it is like to be a non-human. Trying to understand a non-human can therefore be challenging. This is particularly relevant in the case of AI, as few of us understand how the algorithms work (and it is not certain that AI itself has such an understanding). The result is often that we take a mental shortcut and rely on what we know more about, namely what it is like to be human [6]. This is highly effective because it allows us to understand an object or an agent without needing to start from an informational zero point. Many types of similarities between a non-human and an actual human – even superficial similarities – can trigger this process. Thus, anthropomorphism is only one of several ways to attribute properties to a specific object based on information we already have in our mental categories for similar objects. Perhaps we are evolutionarily adapted to be especially effective with information about humans. Being able to quickly realize that another person is approaching, and to quickly understand whether they are friend or foe, must have been advantageous for our ancestors. It may even be advantageous to be oversensitive to humans; the cost of mistakenly thinking of a non-human as a human can be quite low in a world full of potential threats. For someone walking alone on a deserted street in the middle of the night, in a neighborhood that does not feel safe, it may be better to believe that a trash can is a person than that a person is a trash can.
 

A particularly important reason why we need to talk more about anthropomorphism is its effects – and side effects – on our perception of new technology. One of these effects has been identified in several studies, for example concerning cars, brands, virtual agents, and robots, and can be summarized as "what-is-humanlike-is-good" [3]. In other words, anthropomorphism tends to positively influence our overall evaluations of non-humans. One reason is that we have the same tendency regarding humans; the concept of "human" has a positive connotation for most of us. Without this positive charge, many essential human activities, such as forming a family, taking care of children, and creating larger cooperative groups to achieve common goals, would be difficult. Evaluations (i.e., assessments along the bad-good dimension) are indeed central to us humans: we seem to be programmed to perceive ourselves and the world around us in evaluative terms. Moreover, we typically allow our evaluations to guide our behavior in relation to ourselves, other people, objects, and a variety of activities. This is reflected in marketing research and practice, where there is a plethora of models and theories with various evaluation variables – for example, attitudes, customer satisfaction, and perceived service quality. Such variables usually influence whether we want to buy a particular product or service at all, both the first time and repeatedly.

What we need to talk more about is that the "what-is-humanlike-is-good" effect may cause us to lose some of our critical stance towards new technology, which in turn can lead us to embrace it even though we should not. But the opposite is also possible. Anthropomorphism can prevent us from adopting a new technology, even one with the potential to improve our lives, because "what-is-humanlike-is-good" can lead to overly high expectations of what a particular human-like technology can achieve. And when we realize that the technology does not meet our expectations, we feel dissatisfied. An example is the Henn-na Hotel in Japan, known for its (very) human-like robots that were responsible for customer service. However, they did not function optimally; for example, they woke up snoring guests because they mistook the snoring for questions. As a result, the hotel has brought back human employees. Unfulfilled expectations can indeed lead to unnecessary skepticism towards applications that actually have the potential to improve our lives – if we just have a little patience while they are further developed. This is also something we need to talk more about. Some robot researchers have actually considered users' expectations in this way and argue that it may be wise not to make robots too human-like.
 

Another innovation-inhibiting consequence of anthropomorphism arises when a technology approaches a very high degree of human-likeness. According to some researchers [7], we then end up in "the uncanny valley." The main assumptions are these: As a non-human increases its degree of human-likeness, our positive evaluation also increases (the "what-is-humanlike-is-good" pattern). But when the non-human's human-likeness reaches a very high level, it instead becomes uncanny – and at this point, positive evaluations decrease. One reason is that an object with a high degree of human-likeness creates discomfort when we try to classify it (is it a human or not?). This classification is demanding; it calls for extra effort to reach an understanding (and we rarely like effortful tasks), and the negative charge of this extra effort can color our evaluations so that they become less positive [8]. Another reason is that a non-living object that resembles humans, yet not completely, reminds us that we humans are living rather than non-living. It confronts us with the fact that what is alive will eventually die, which has a negative connotation and can lead us into "the uncanny valley." However, there is not much empirical support for us actually ending up in this valley. Presumably, one reason is that the robots and other non-humans shown to participants in studies have not yet reached a sufficient level of human-likeness.
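To make the hypothesized shape concrete, here is a minimal sketch in Python. The functional form is purely illustrative – an assumption of mine, not an estimate from the literature – but it captures the claim: evaluation rises with human-likeness and then dips sharply just before full human-likeness.

import numpy as np
import matplotlib.pyplot as plt

# Degree of human-likeness, from 0 (machine-like) to 1 (fully human-like).
h = np.linspace(0.0, 1.0, 500)

# "What-is-humanlike-is-good": evaluation rises with human-likeness ...
rising_trend = 2.0 * h

# ... except for a hypothetical dip just before full human-likeness (the "valley").
# The dip's location (0.85) and width are illustrative assumptions, not empirical values.
valley_dip = 1.6 * np.exp(-((h - 0.85) ** 2) / 0.004)

evaluation = rising_trend - valley_dip

plt.plot(h, evaluation)
plt.xlabel("Human-likeness of the non-human agent")
plt.ylabel("Evaluation (bad to good)")
plt.title("Sketch of the hypothesized uncanny valley")
plt.show()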

Yet another innovation-inhibiting consequence of anthropomorphism is that a high degree of similarity between non-humans, especially robots, and humans can give rise to the feeling that we humans, as a unique life form, are threatened. This means that a very human-like robot, a so-called android, can evoke the strongest feeling of threat. It should be noted, however, that there are not yet many androids that are difficult to distinguish from humans. Therefore, we should perhaps not expect that a typical service robot, or AI of the type that exists today, will evoke strong feelings of threat. But that day may come. Before it does, there is one more reason to talk more about anthropomorphism: to clarify for ourselves whether we really are a unique life form.

 

References

[1] Epley, N., Waytz, A. and Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114 (4), 864-886.


[2] Fiske, S. T., Cuddy, A. J. and Glick, P. (2007). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11 (2), 77-83.


[3] Söderlund, M. and Oikarinen, E. L. (2021). Service encounters with virtual agents: An examination of perceived humanness as a source of customer satisfaction. European Journal of Marketing, 55 (13), 94-121.


[4] Söderlund, M. (2022). Service robots with (perceived) theory of mind: An examination of humans' reactions. Journal of Retailing and Consumer Services, 67 (July), 102999.


[5] Reeves, B. and Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People. Cambridge, United Kingdom: Cambridge University Press.


[6] Epley, N. (2018). A mind like mine: The exceptionally ordinary underpinnings of anthropomorphism. Journal of the Association for Consumer Research, 3 (4), 591-598.
 

[7] Gray, K. and Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125 (1), 125-130.


[8] Wiese, E., Metta, G. and Wykowska, A. (2017). Robots as intentional agents: Using neuroscientific methods to make robots appear more social. Frontiers in Psychology, 8 (October), 1-19.


