A week ago, the US Centers for Disease Control and Prevention (CDC) published a new report on suicides in the country showing an increase after two years of decline. “It is a real public health concern that we are trying to address through the use of some level of artificial intelligence,” he says.

Behind these words is Albert “Skip” Rizzo, director of the Medical Virtual Reality team at the Institute for Creative Technologies at the University of Southern California (USC), in the United States. The clinical psychologist’s work in virtual reality and artificial intelligence probably explains his evident optimism when assessing emerging technologies in relation to mental health. Rizzo has seen first-hand how innovative digital resources can improve the quality of life of people in psychological distress.

How can artificial intelligence affect psychological well-being?

It depends on the context, but it is certainly a very powerful tool, and it is not going away. Think of a person who is distressed and has no one to talk to: offering something like a virtual helper can be useful. If artificial intelligence is programmed properly and ethically, it can help people explore their own problems and find some kind of comfort. Besides, it is always available, never gets bored, and always pays attention.

And it never tires.

Yes, and it has encyclopedic knowledge of any clinical condition or therapeutic approach, and it remembers everything you have told it. Those are some very powerful attributes.

So how can chatbot-like technologies be useful in helping people with disorders?

A certain percentage of treatment outcomes is directly related to the therapeutic relationship. It comes down to the empathy the clinician can show in the sense of shared experience, since they have a personal history of their own, which artificial intelligence does not. So the technology has a number of tremendous attributes, but a key piece is missing. Even so, it offers people the opportunity to talk freely about their difficulties. There are a lot of people who, because of stigma, will never see a therapist, don’t have the money, or don’t know that treatment is available.

What work has your team developed?

We started using virtual reality to train clinicians to be more skilled in their work. We built a character to teach medical students how to conduct a clinical interview with a sexual assault survivor. We are now building virtual patients to teach psychologists how to implement what we call a suicide safety plan. Helping someone at risk of suicide come up with a strategy so they don’t act on an impulse is very delicate, empathetic work.

You also have virtual assistants.

Yes, we are currently developing a mobile app for military veterans at risk of suicide that includes a virtual human they can communicate with at any time. The app could even track your smartwatch and detect that you are having a panic attack because your heart rate has spiked, or that you haven’t slept well in the last three days, a risk factor for suicidal impulses. We also have another project aimed at health professionals, who have some of the highest suicide rates, along with other psychological pathologies.

If someone really needs help, at some point a real person should step in. How do the two fit together?

Our approach has always been that what we develop is not a replacement for a real person, but rather a way of filling an unmet need. If the software detects that a person is at risk, it will focus on helping them access treatment with a real person, on socializing them so that they feel comfortable asking for help. That’s why we don’t call our apps virtual therapists or AI therapists. They are not; they are support agents or guides.

Why do we trust these artificial intelligences the way we do if we know they are not real?

We watch movies with actors and we know they’re not real, but we have emotional responses and we get invested in whether they succeed or die, or even in whether the dog dies. People suspend disbelief once they start having an interaction, and the more realistic the interaction becomes, the more they suspend disbelief. I think this is something adaptive that has evolved with us. The brain is conditioned, or tuned, to take things that are not real and turn them into a reality of its own. And now we have reflections of ourselves that understand what we’re saying and return responses so believable that we forget for a moment that they’re not real. I have seen videos of parents interacting with virtual representations of their daughter, who had died by suicide, asking things like “How are you, where you are?”.

How can this type of technology change our way of interacting?

If you fall in love with an artificial intelligence you will never meet, is that a good thing or a bad thing? There are many very lonely people in the world. Some research shows that extreme loneliness has the same impact on health as smoking 15 cigarettes a day. So could this kind of virtual friendship or virtual relationship fill that void? Perhaps it will make people feel less alone and more likely to develop outside relationships. Or it could be the opposite: “I don’t need real people; I have my software, which never judges me.” Being optimistic, I would say that if AI is used by an individual in a way that makes them feel more comfortable expressing their feelings or talking about difficult issues, that would carry over into their daily interactions with real people.

And how does it affect us if we cannot tell whether we are interacting with an artificial intelligence or with a real person?

That’s an important thing to consider, yes, especially when you’re imitating a person who may be very well known. I think transparency is essential. There has to be a notice, something that informs people that they are, in fact, talking to an artificial intelligence. You have to have ethical people overseeing it to prevent as much harm as possible. Just as Asimov had his three laws of robotics, we need three rules of artificial intelligence: it must be transparent, it must be designed from a prosocial perspective, and it must do no harm. Despite everything, I stand by the idea that there is so much positive potential in medicine, health, education…

Speaking of which, a friend of mine who is studying for civil service exams recently tried ChatGPT for the first time and felt she had discovered a whole new world for preparing her topics.

Yes, and the challenge here is precisely that it is a very, very powerful tool. However, I am concerned that kids won’t develop certain processes for expressing their thoughts. I grew up having to push myself every time I had to write something. I think that hard work developed certain dimensions and processes in my brain, so I already have a foundation for expressing concepts in writing. Now I can go to ChatGPT and ask it a good question that leads to something interesting. In fact, I had some time this morning, so I asked it your questions.

And what was the result?

Probably better than my answers (laughs).

So might this artificial intelligence lead to a decline in our cognitive processes?

Well, deep down I think this is going to be a tool that amplifies our cognitive abilities. When the first calculators came out, people said young people would never learn to do math, but that never happened. In fact, the calculator freed us to spend more time thinking about higher-level math. Maybe ChatGPT or some similar AI will make us “smarter” by freeing up time to think about content in other, unique ways. There is both an upside and a downside, and there is no way to know what is going to happen. But yes, I am optimistic.

So I take it that, on balance, you lean towards the positive side.

Technology is always a double-edged sword. Think about fire: it is essential for keeping warm and cooking, but it can also burn your house down. Or cars: they take you places you couldn’t otherwise reach, but you can die in a traffic accident. We can’t expect anything different with artificial intelligence. There will be success stories, but we are also going to witness negative outcomes we won’t be proud of, some anticipated and some unforeseen, things we cannot even imagine. It is inevitably going to happen, and we are going to have to stay vigilant. That is humanity.