Daito Manabe’s artistic output spans so many themes that it is hard not to overlook one. And yet this Tokyoite usually defines himself first and foremost as a programmer, perhaps the facet that remains most hidden in his projects, which range from dance with drones and the design of shows such as Björk’s or the closing ceremony of the Rio 2016 Olympic Games to experiments that unite music and the human body by applying electric impulses to the muscles of his face. Founder of the Rhizomatiks studio, Manabe is a regular at Sónar, where this year he has taken part in a masterclass and two audiovisual shows. This Saturday, with a slightly lighter schedule, he meets La Vanguardia at the Montjuïc venue.

You have said that you want to link brain activity to music. What are you looking for with this mix?

When you listen to music, images instantly come to mind. What I am trying to do is extract that part, the thing that immediately comes to mind, as an output. In the digital world, audio and video are treated separately: a microphone and a camera already divide them. But the human brain is a single system that reacts to every input together; it takes in the information as a whole. My work deals with how to integrate sound and the visual within a piece. That is why I use brain-recording technology as the basis for the work.

Is the human being’s ability to think only in the brain or is it throughout the body?

There are three universes, three spheres. On one side is the real world; another universe is the brain; and the third is the body, the border that connects those two worlds. The universe inside and the world outside are different in size, but their complexity is almost the same. What provides the least information is the human body and its senses. We can only take in very limited things through the body, through vision, touch or hearing: a few restricted inputs of information. Likewise, when you act on the outside world through your body, you can only do a few things with your hands and your muscles, so you can only affect that world to a limited extent. Art, on the contrary, can be expressed in many ways. Ultimately, we do not need a body: the universe, and the brain within the universe, are enough. We need to feel the limits of the body and push in that direction. So when artists are interested in the brain, or in the body without the flesh, it is because they sense some kind of limitation there that they want to overcome. That is common not only to me but to many artists.

You have made dance pieces with drones. When do you think we will remove the human dancers, perform only with drones, and have the audience accept it?

I have made works that use only drones, but I have noticed that when there are humans, the audience can feel it as if it were also themselves; they are more interested when there are humans on the stage.

Can humans bond with robots?

There are three categories of robots: the humanoid, the non-human, and the dog-shaped robot. We see a robot with legs and hands or arms as a living creature; if someone kicks the dog robot, it makes you feel angry or uncomfortable. If the robot has arms and legs as well as a face, people start to think it is more like them, and they begin to feel attachment and emotions towards the robot. In Japan there is a mobile phone called Robophone that is shaped like a robot. It has a human form because human beings can transmit feelings and emotions to it; its maker thinks it can obtain further information from users’ feelings.

Your work fuses science, computing and art. Where does it start?

All my work is a reflection on the connection between sound, video and the human being. The body and technology is the universal theme; there is a lot of interesting research published on the subject, but although the technology itself is interesting, it might not be usable in the real art world. I think synaesthesia could solve the mystery, which is why I became interested in brain recording; the technology I work with is based on it. I am currently working on an experiment in which I culture rat brain cells and then send them electrical signals in order to train them and make them play. What interests me is how to send signals from the outside into the brain; that is the original goal. You cannot open up a human brain and send in electrical signals, but with cultured brain cells you can. AI is already working now, it is here, but I expect that this brain-cell culture experiment may be the next breakthrough in artificial intelligence. The next question I would like to ask about AI is how it will influence art.

Do you think this artificial intelligence can autonomously make art on its own without human direction?

It has a lot of data that it has learned from, but I do not think it can do anything beyond that. Current AI, ChatGPT for example, is trained on a dataset from which the model is created. The limitation is that it cannot go beyond that data and that model. It is difficult for it to invent something that has never been done before; maybe it can be done, but the important thing is that the human being is always the one who gives the order.

Will AI create new music that we haven’t heard before?

How the order is given is the crucial part. It is necessary to fight against the idea that art will simply come out of AI; what art must do is resist it, reject it. On the other hand, making art with AI is not just a matter of images or music. I believe that vector information will prevail as an art form; it will become the work itself. The vector information in latent space is going to be the seed of the next art. Latent space is a very high-dimensional space that only the AI can understand. What the human perceives as the AI’s output, in two or three dimensions, is actually low-dimensional information for the AI. But there is hyper-high-dimensional information in the AI itself, something the human being cannot perceive at this stage, and that can be a work of art.
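The latent-space idea Manabe describes can be sketched in code. The following Python toy is purely illustrative (the random linear "decoder", the dimensions and all names are hypothetical stand-ins, not any real generative model): a high-dimensional latent vector, which only the model "perceives", is projected down to a small two-dimensional image, which is all the human sees, and the interpolation between two latent vectors is the kind of vector information he suggests could itself become the work.

```python
# Toy illustration of latent-space vectors as artistic material.
# Hypothetical setup: real models (GANs, diffusion models) use a
# learned decoder, not this fixed random linear projection.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 512   # the "hyper-high-dimensional" space the model works in
IMG_SIDE = 8       # the low-dimensional image a human actually perceives

# Stand-in decoder: any function mapping latent space -> pixel space.
W = rng.standard_normal((IMG_SIDE * IMG_SIDE, LATENT_DIM))

def decode(z: np.ndarray) -> np.ndarray:
    """Project a latent vector down to a small 2-D image."""
    return (W @ z).reshape(IMG_SIDE, IMG_SIDE)

# Two latent "seeds" and a point halfway along the line between them:
# the interpolation path is vector information that never appears in
# the rendered image itself.
z_a = rng.standard_normal(LATENT_DIM)
z_b = rng.standard_normal(LATENT_DIM)
midpoint = decode(0.5 * z_a + 0.5 * z_b)

print(z_a.shape)       # what the model "perceives": a 512-dim vector
print(midpoint.shape)  # what the human perceives: an 8x8 image
```

Because this toy decoder is linear, decoding the midpoint of two latent vectors gives exactly the average of their two images; in a real learned decoder the path through latent space produces genuinely new intermediate outputs, which is what makes the latent trajectory interesting as material.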

If it cannot be perceived, how can it be managed?

When a human being sees or feels an AI artwork, they do so in a low dimension, the two-dimensional image; but before that, the AI processes the information in a higher dimension. That information can itself be perceived as a work of art. It is something that will definitely happen within about two years.

What do you feel when you perform live?

I feel that I am able to complete the work. We do the same thing during rehearsals, with the same audio levels and the same visual output, but there is no audience. It is different in front of the public: only then, through the audience’s response, are the art and the work complete.