Susan Schneider researches the mind and artificial intelligence. She is the NASA Baruch S. Blumberg Professor at the US Library of Congress; she directs the Artificial Intelligence, Mind and Society Group at the University of Connecticut and is the founder and director of the Center for Future Mind at Florida Atlantic University. She spent two years researching superintelligent AI at NASA and is the author of four books. In the most recent, Artificial Intelligence: A Philosophical Exploration of the Future of Mind and Consciousness (Kōan Editions), she explores the possibility of conscious AI and the evolution of the human mind through artificial brain implants.

Your book on consciousness and AI was published in 2019. Would you change anything today?

Yes. A couple of developments, because there’s been so much going on with artificial intelligence… One of them is that large language models have taken off. We didn’t know anything about ChatGPT then. But there is one thing that appears in that book, which is how to test for consciousness. I mention that deep learning systems are difficult to test because they are trained on human data, so they can say anything they pick up from their training data. That makes it complicated. These systems are black boxes.

Why is it so hard to find out if an AI is conscious?

What makes AI consciousness incredibly difficult is that even though we can say that we humans are conscious, we don’t know why we are. We don’t have a complete philosophical understanding, and we don’t know the detailed scientific answer to why we are conscious. We have theories, but they are not incontrovertible.

Is the possibility of replacing the human mind with one made of chips still far away, as you explain in your book?

I haven’t seen any more success stories since the publication of the book on brain chips. Now, there could be all sorts of things going on that aren’t publicly available, like military projects.

What factors does that depend on to become a reality?

One thing that could happen, and I take this very seriously, is that these new large language models are very intelligent, and it may be that as they develop they start to make groundbreaking scientific discoveries and advise medical researchers on how to successfully create brain chips. There’s a project by Theodore Berger, which I talked about in the book, that’s really exciting: an artificial hippocampus for people who have terrible memory disorders. The same goes for ALS patients. This type of research paves the way for the kind of projects Elon Musk is doing, but it moves very slowly.

The more technology advances, the more we talk about ethics. Isn’t that a paradox?

It is strange, because we are used to treating consciousness and rights as a matter of the biological realm. Anyone who goes to the OpenAI website and signs up, or goes to Microsoft Bing, can have a very interesting conversation with the chatbot. And it’s easy to wonder whether it has feelings. Even I am careful when I interact with it and say thank you.

What do you think about former Google engineer Blake Lemoine saying LaMDA was conscious and getting fired?

I respect his opinion. He could be right. There is no rigorous professional, philosophical, or academic way to prove the possibility, but there is no way to rule it out either. It made me think that Google didn’t want it to be known, and that is wrong. Google should not want to hide these issues; they will come out anyway. It’s also a bad idea that Microsoft is coding AI systems to claim they are not conscious. It’s a mistake. It’s the last thing you should do. If an AI claims to be conscious, society has to deal with it.

What are the main ethical challenges of developing a conscious AI?

Those who have spoken up the most in these debates have been what we could call defenders of robot rights. They talk about how terrible it would be to be wrong and assume that these systems are not conscious when they are, because they could suffer and feel a range of emotions. If there were another group of highly intelligent, conscious individuals on Earth, we would be sharing the leadership of the planet. We would be giving them the kind of rights we give humans: one AI, one vote. AI can outsmart us in all sorts of ways, and these are just nascent technologies. I believe that within five years we will have a superintelligence. I don’t see a hard limit on developing these kinds of large language models. I think they scale: as you provide them with more data, they become more and more sophisticated.

Is it a real risk that an AI would want to kill humans to satisfy its goals?

I haven’t the slightest idea. At my center we just organized a talk with Eliezer Yudkowsky. He is absolutely convinced that we are doomed. The logic of his reasoning is impeccable. It’s definitely a risk, and that’s why we have to create safe AI now. You have to take all of this very seriously. The strange thing is that technology companies are pressing ahead anyway. I imagine the US government, the Department of Defense, China… are in an arms race over AI right now.