
Findings

So far, the studies in the project Interactions with Artificial Humans have generated a number of findings. They are intended to be useful for those who develop and/or employ non-human agents in interactions with humans (such as customers and employees).

In particular, the project has identified several characteristics and behaviors of non-human agents that have a positive impact on humans’ evaluations. Here are some examples:

01

If a virtual agent displays happiness, it produces a better overall impression.

This may seem fairly obvious, at least if we assume that a happy human is rewarded with better evaluations in human-to-human interactions (many of us prefer to interact with others who are happy rather than unhappy, particularly in service situations). The challenge, however, is how a non-human agent should display happiness. Many users are likely to think that contemporary non-human agents cannot really experience any emotions at all, so it is probably not a good idea to program such agents to say things like “I am so happy today!”. In this study, the non-human agent was a chatbot that communicated only with text (in the same way as ChatGPT), so the challenge was to make the agent appear happy through its use of text. And this was indeed possible: avoiding negative words and adding exclamation marks did the job.
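For developers, a minimal sketch of this kind of text-level happiness display could look as follows. This is only an illustration, not the manipulation used in the study; the word list and the example reply are hypothetical.

```python
import re

# Hypothetical set of negative words to keep out of the agent's replies
NEGATIVE_WORDS = {"unfortunately", "problem", "cannot", "sorry", "bad"}


def display_happiness(reply: str) -> str:
    """Make a text reply appear happier: drop negative words and
    end sentences with exclamation marks instead of periods."""
    # Filter out negative words (a real system would rephrase rather than delete)
    words = [w for w in reply.split() if w.lower().strip(".,!?") not in NEGATIVE_WORDS]
    cleaned = " ".join(words)
    # Turn sentence-final periods into exclamation marks
    return re.sub(r"\.(\s|$)", r"!\1", cleaned)


print(display_happiness("Your order has shipped. It should arrive on Friday."))
# -> "Your order has shipped! It should arrive on Friday!"
```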

https://doi.org/10.1016/j.jretconser.2020.102401

02

If a service robot displays warmth, this will have a positive impact on how it is evaluated.

Warmth is indeed a central aspect of being human; we like people who appear warm. Would the same hold for a service robot? Probably. One would expect that a robot that displays warmth towards a human user would be liked by that user. In this study, however, the effects of robot warmth were assessed in the context of a robot-to-robot interaction (it is expected that there will be swarms of robots in the future, and that we humans will sometimes watch when robots interact with each other). That is, when a human watches two robots interact, does it matter if one robot displays warmth towards the other? Yes, it does, so a robot should display warmth in a humanlike way vis-à-vis other robots.

https://www.emerald.com/insight/content/doi/10.1108/JSM-01-2021-0006/full/html

03

If a robot is asked by one user about the behavior of another person whom the robot has observed and recorded, it should not give away this information.

A robot sharing its environment with several humans would come to know a number of things about them. And it is likely that we humans (we are typically socially curious) would like to ask such a robot about the behavior of others. “What did X actually say about me while I was absent?” is an example question. This study shows that a robot is evaluated more positively if it refuses to give away information about others when doing so may violate their privacy.
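As a purely illustrative sketch (not taken from the study), a response policy along these lines could check whether a question concerns an observed third party and decline to answer if so. The names, observations, and keyword check below are hypothetical.

```python
# Hypothetical record of what the robot has observed about people in its environment
observations = {
    "alex": "complained about the new schedule",
    "maria": "left the meeting early",
}


def answer(asker: str, question: str) -> str:
    """Refuse to disclose observed information about anyone other than the asker."""
    mentioned = [name for name in observations if name in question.lower()]
    if any(name != asker.lower() for name in mentioned):
        # Protect third parties' privacy: decline rather than report on others
        return "I'm sorry, I can't share what I have observed about other people."
    # Questions that do not concern observed third parties can be answered normally
    return "How can I help you?"


print(answer("Chris", "What did Alex actually say about me while I was absent?"))
# -> "I'm sorry, I can't share what I have observed about other people."
```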

https://www.emerald.com/insight/content/doi/10.1108/JSTP-09-2022-0196/full/html

04

A virtual agent that is perceived as an expert in a domain should avoid admitting that it does not know the answer to user questions.

Humans in a professional human-to-human context are advised not to let others know that they lack knowledge. Thus, it is recommended that we avoid saying things like “I do not know”. Is this good advice for a chatbot as well? After all, perhaps the most impressive chatbot to date, ChatGPT, occasionally admits that it does not know certain things. In this study, however, a chatbot was evaluated more positively if it never said “I do not know” in response to user questions. Displaying intellectual humility, then, does not seem to be a good idea for a bot.

https://www.tandfonline.com/doi/full/10.1080/09593969.2023.2253003
