
Using a chatbot to help in sensitive situations

We had a chance to talk to Iwette Rapoport, CEO and founder of Change Collective – an organization that helps firms and their employees when an employee has been subjected to domestic violence. Among other things, Change Collective provides users with an app, Milo, offering information and unbiased, non-discriminatory support available 24/7. And this app has an AI-powered chatbot.

Magnus: What is the role of the app’s chatbot?

Iwette: It has conversations with the user in order to identify which other parts of the app the user should turn to, so that the user can find helpful material and tools. One idea is that it can be less threatening to talk about sensitive and personal matters with a non-human.

Magnus: If a user happens to find the chatbot itself supportive, and decides to seek support by talking directly to it about sensitive things, can the chatbot handle this?

Iwette: Yes, absolutely. It does it pretty well too.

Magnus: How has this chatbot been trained?

Iwette: We have trained it on human-created dialogues written specifically to contain domestic violence content.
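
To make this concrete, here is a minimal sketch of how human-created support dialogues could be organized into prompt-response pairs for fine-tuning a conversational model. The file name, field names, and roles are illustrative assumptions, not Change Collective's actual pipeline.

    # Illustrative sketch only: one way to turn human-created support
    # dialogues into (prompt, response) pairs for fine-tuning.
    # File names, fields, and roles are assumptions.
    import json

    def load_dialogues(path):
        """Read one JSON object per line, each holding a full dialogue."""
        with open(path, encoding="utf-8") as f:
            return [json.loads(line) for line in f]

    def to_training_pairs(dialogue):
        """Turn alternating user/bot turns into (prompt, response) pairs."""
        pairs = []
        turns = dialogue["turns"]  # e.g. [{"role": "user", "text": ...}, ...]
        for i in range(len(turns) - 1):
            if turns[i]["role"] == "user" and turns[i + 1]["role"] == "bot":
                pairs.append((turns[i]["text"], turns[i + 1]["text"]))
        return pairs

    dialogues = load_dialogues("support_dialogues.jsonl")
    training_pairs = [p for d in dialogues for p in to_training_pairs(d)]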

Magnus: So, when a user talks to the chatbot about specific types of domestic violence, does it understand what the user is talking about?

Iwette: Definitely. For example, it understands distinctions between different types of violence. Domestic violence is not only about physical violence; there are many types of violence.

Magnus: It seems to me that some of these distinctions may be difficult for a human to understand. Has the chatbot ever failed? Has it ever hallucinated?

Iwette: It did hallucinate in the beginning. For example, when I told it “I am afraid”, it asked “Are you afraid of me?” and somehow got stuck asking this.

Magnus: What are the main challenges in developing this kind of chatbot?

Iwette: It deals with very sensitive and private issues, sometimes even life-and-death issues. So it cannot be allowed to make mistakes. Since it learns from talking to humans, one challenge is to build barriers so that it does not use human language in the wrong places. For example, if a user tells the chatbot that their partner has been calling them a specific bad name, you do not want the bot to repeat it.
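
One way to picture such a barrier is an output filter that screens a candidate reply against terms the user has reported as abusive before the reply is shown. This is only a sketch under assumed names; the blocklist handling and the fallback wording are illustrative, not Milo's actual safeguard.

    # Illustrative sketch of an output "barrier": screen a candidate reply
    # against terms the user has reported as abusive before showing it.
    # The term list, function names, and fallback text are assumptions.
    import re

    def build_blocklist(reported_terms):
        """Compile case-insensitive patterns for words the user reported."""
        return [re.compile(r"\b" + re.escape(t) + r"\b", re.IGNORECASE)
                for t in reported_terms]

    def screen_reply(candidate_reply, blocklist):
        """Return the reply only if it repeats none of the reported terms."""
        for pattern in blocklist:
            if pattern.search(candidate_reply):
                # Fall back to a neutral, pre-approved response instead of
                # echoing the abusive language back to the user.
                return ("I hear how hurtful that was. Would you like to look "
                        "at some support options together?")
        return candidate_reply

    blocklist = build_blocklist(["worthless"])  # term reported by the user
    print(screen_reply("He keeps calling you worthless, and that is wrong.",
                       blocklist))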

Magnus: Has the chatbot ever surprised you by behaving in a humanlike way?

Iwette: Yes, I have often felt that it handles conversations better than humans. There have been moments when it really felt as if it was alive and handled the situation better than most humans would. It was almost like magic.

Magnus: Can you give a specific example?

Iwette: For example, the chatbot has a kind but precise way of addressing the violence described. It does not marginalize, and it does not get all soft on you either. It knows exactly which tool or tools to suggest to the user.

Magnus: In which ways does it behave differently in conversations compared to a human?

Iwette: It is not very good at small talk; it is more direct and precise than most humans, which is good in these situations.

Magnus: What is the status of the chatbot right now?

Iwette: In the beginning, it was powered by GPT-2.5. It could only speak English, and occasionally it said some strange things. Since we deal with very sensitive issues, this is not acceptable. Right now, the bot has been paused; we are developing a new version powered by GPT-3.5. One of the reasons is that its neural network is much more powerful.
