The Spanish translation of this interview is available in our print issue dedicated to sin, #JD13
Defining Luc Steels (Belsele, 1952) simply as a scientist would be far too limited a description of this vastly experienced Belgian linguist who loves art (he has written an opera and a theater play, besides doing some performances in his youth) and who went to the US to study computer science and artificial intelligence at MIT with heavyweights like Marvin Minsky and Seymour Papert. Founder and first director of the Sony Computer Science Laboratory, this researcher tries to identify and understand the origins and evolution of language using models and simulations in which several robots autonomously develop their own communication system.
I’ve read that what fascinates you most is the making of meaning, a topic that has become a thread running through all your work over the years. Is it a question that remains unsolved?
Yes, this is my main topic. It’s a topic that traditionally artists and people in the humanities are concerned with, but I try to approach it from the viewpoint of building artificial systems of which we could say that they are not only able to process information, but to create new meaning. I consider the problem of meaning to be the big limitation of today’s Artificial Intelligence. But where are we regarding AI? On the one hand, AI research is clearly very advanced. People don’t know it, but if you use your smartphone or any search engine there’s AI technology behind it. In my opinion the fundamental limitation has to do with meaning. Almost all of the applications that we see today avoid meaning. Let me give you an example: you could go to Google Translate right now. Everybody who has used it has at some time had an amazing experience, because the translation is good, it’s what you expected, but at other times it’s ridiculous. The question is, why is that? Well, it has to do with the way these systems work. What they do is they have access to very big databases of human input and they process that information. And I don’t say understand it, I say process it. In the case of translation, they have access to texts for which there is a known translation. They pair little bits of text from the source with little bits of the translation. Then they pick these little bits, which are called n-grams, find them in your text, take the corresponding bits of the translation, and puzzle them together. But they don’t understand the text, they don’t know what it is about. And they don’t try to do very deep linguistic analysis, they have no clue. That’s why I say they don’t use meaning, they purely use information processing. This is true also for search on the web, because the databases look for keywords. The scale at which this is happening is incredible, it’s just amazing. And we all use it, it’s useful.
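The fragment-pairing idea Steels describes can be sketched in a few lines of Python. This is a toy illustration only: the phrase table below is invented, and real statistical translation systems score many candidate segmentations with learned probabilities rather than matching greedily. What it does show is exactly his point: the program stitches memorized fragments together with no notion of what the sentence means.

```python
# Toy "translate by pairing memorized fragments" sketch.
# The phrase table is invented for illustration; real systems
# learn millions of such pairs from parallel texts.
PHRASE_TABLE = {
    ("the", "red"): ["la", "roja"],
    ("red", "bottle"): ["botella", "roja"],
    ("the",): ["la"],
    ("red",): ["rojo"],
    ("bottle",): ["botella"],
}

def translate(sentence):
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedily take the longest known fragment starting at position i.
        for n in range(len(words) - i, 0, -1):
            chunk = tuple(words[i:i + n])
            if chunk in PHRASE_TABLE:
                out.extend(PHRASE_TABLE[chunk])
                i += n
                break
        else:
            out.append(words[i])  # unknown word: copied verbatim
            i += 1
    return " ".join(out)
```

Run on "red bottle" it happens to produce the well-formed "botella roja", but on "the red bottle" the greedy fragment choice yields the garbled "la roja botella": the same mechanism gives you the "amazing" and the "ridiculous" translations, because nothing in it understands the sentence.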
I use it, and I am happy with it, but we have to realize its limitations. And so all these people who talk about artificial humanoids taking over, and say that we are almost there, in ten or twenty years, ignore this fundamental problem that we have. This is the point I wanted to make about the current limits of AI.
And what are you trying to do about it?
In the experiments that I have been doing with robots we try to set up a situation in which meaning is important. Meaning is about making distinctions in the world that are important to you: if it’s red you stop and if it’s green you can go. It’s as simple as that. This distinction is important to you because it’s relevant. And for someone who is color blind the relevant distinction is the position of the lights. So what is meaningful depends on your sensors, on the history of your interaction with the world and on what is shared in the cultural environment you are in. When we do experiments we have to bring all these ingredients into the experiment.
You started studying languages and philosophy but later moved on to computer science and electrical engineering and, with that, Artificial Intelligence. Was it a change by necessity?
Yes. If you do science you can do it in two ways: observe, see a pattern in nature, make a theory and make a prediction to see that pattern again in the future; or do experiments. In an experiment you have the system that you want to understand and you poke it. You change something in the environment (in the lab where I work, they quite often change something in the internal structure) and then you see the reaction. These are experiments where you manipulate reality. There are psychologists who do that: they set people up in certain circumstances. But, of course, what you can do in this way is extremely limited. Therefore there is another way of doing experiments, and this is what I am doing: let’s build an artificial system that mimics or emulates the system we are interested in, and then we can poke that system. So we do experiments. We don’t really do applications. Of course people see a robot and ask, “What can it do?” And then I have to say, “Nothing, that’s not what we are about.” We set them up and they do things, like walking or picking up objects. It’s a very complicated system, and then we can poke it. We can observe everything going on in their behavior and also internally in the system. And that’s a great thing. With humans we cannot really observe what is going on in the brain. But if you poke this artificial system you can measure everything, you can restart and redo the experiment… it’s a new method. It’s what they do now in synthetic biology: they build artificial DNA and do experiments. It’s necessary because we cannot do this with humans. And I am also not a big fan of animal experiments. One of the bad experiences of my life was when I visited a lab in San Diego. They suddenly opened a door to a room where they did experiments with apes. I had never seen that before. I saw those animals with their brains partly open… for me it was pretty shocking. I am not saying it shouldn’t be done, but it’s not my way.
I feel that if we want to take the experimental approach, which is very powerful, we can do it by building artificial systems and then poking them.
You were very lucky to discover the benefits of an interdisciplinary context very early in your career. Was that a usual step in your surroundings? Nowadays it is still rare to see somebody who moves from the humanities to science, for example.
At that time it was hard, and it’s still hard nowadays. It’s very hard to go from the humanities to science… and certainly to engineering. When I was a student I discovered the computer… and I say THE computer because at the university at that time there was one computer. The mathematicians and the physicists monopolized it, and I became totally obsessed with it. Computers were for number-crunching, so at the beginning it seemed crazy, because it had nothing to do with languages. And actually today there is incredible resistance in the humanities to people doing this kind of thing. Computer scientists are quite open, they pick from everywhere, they are pragmatic people, but the other way round is very difficult. But it’s not just the humanities and technology and science, there is also art. I’ve always been interacting with artists because, in a sense, art is about making meaning. I feel we are investigating the same question, but in another way.
How was the experience of working with Minsky?
Marvin Minsky is a bit crazy, but I liked him very much. He is very controversial because he likes to upset people, he is a bit iconoclastic. We got along very well. I knew who he was, and the first time I met him I went into his office and he was in his tennis shoes, with his feet on his desk, playing with something. He asked me a question about electronic circuits. He is a very creative and open thinker. There were crazy projects going on, like building a gigantic computer with Tinkertoys, or walking on water… there was this creative atmosphere in the AI Lab at that time, which was extraordinary.
No idea was too crazy.
Exactly. Minsky was always open and encouraged the students to do crazy things, which is now almost impossible. Seymour Papert was developing Logo and then the Logo Turtle… all of this was going on, it was an incredibly creative period in computer science, both at MIT and at Stanford. It was not that big a network, maybe 200 people in the United States, and they were doing all these things which are now the foundation for Apple and all these companies with window machines and programming languages. It was an incredible time.
It was not a boring time.
Certainly it was not a boring time, but you only realize it afterwards, because I was a young student. I got there, all that stuff was happening, and I thought that was just what the world was like. But later you realize how unique that period was in the history of computing. And it was all without any serious commercial obsession; it was playing. The central place in the lab was called “the playroom”. There was a big carpet and you could lie down on the floor.
When you left MIT you went to work at an industrial research lab in geophysics. Even though it was a research lab, did you feel a huge difference between industry and university research?
Yes, of course there is a difference. But the strength of the US at that time, and still to some extent today, is that they have those areas of creative excitement, even at serious companies. The company I worked for was a special one, Schlumberger. It was about geophysics, so it meant logging oil fields and taking measurements, and I was involved in projects to interpret these measurements. But this company was very adventurous, because people go to oil fields in Africa or the North Sea to take these measurements. They are like cowboys… but they are physicists. Actually, there was not so much difference from the university, because in this company there was a special spirit of self-organization and intensity of working together. Companies need those kinds of regions of intense creativity, and as soon as they start to control them and start talking about milestones it’s the end of it, it becomes the wrong place.
Are some languages more difficult to learn than others as a second language, or does it all depend on your mother tongue?
It depends enormously on your mother tongue, because when you learn a second language you start by applying grammatical constructions and learning strategies similar to those of your own language. If the languages are close, like Spanish and Italian, of course there is confusion, but your strategies work, you know what to pay attention to. For example, if you speak Spanish you are used to cases, complex verb forms and genders. But there are languages which don’t have this kind of thing, so you have to become sensitive to certain properties of the linguistic material that you would otherwise totally ignore. A good example is the tones in Chinese: we are not sensitive to tones, we use intonation structures. You cannot have both intonation and tone, so it’s very difficult for us to learn.
Which would you say was the first word ever pronounced?
Do you expect me to know? [Laughs] The first language is gesture, and gestures often come from actions. For example, grasping. Or the pointing gesture, which comes from the grasping gesture. I cannot reach the bottle and you see it, so by pointing I can get you to give it to me. And then sounds appear, but just for drawing attention, not as meaningful words.
That was what my question was about. I didn’t expect you to know the first word ever created, but rather what language was invented for. To express feelings? To ask for help? To threaten?
If we look at children, it is to get things done, to cause an action in another person. It begins with the gesture, then comes the sound, and then that sound becomes specialized for certain objects, like the bottle. And then the whole thing takes off. It’s pragmatic, for doing things. It’s not for describing things… well, describing a bit, because you have imperative gestures and declarative gestures. But yes, I think it was gestures, then sounds, then words, and then developing a grammar.
Some words are very similar in most languages regardless of their language family: “Mum” (English), “Mamá” (Spanish), “Мама” (Russian), “Ama” (Basque)… Why is that? Is it because the first sounds a baby can make are “a” and “m”?
Certainly babies start with babbling, and of course what is easy for one is easy for another, because we share the same vocal system, so “Mama” is an important word for drawing attention. And the shape of the word could be determined by how easy it is to pronounce, rather than being like these Slavic words with five consonants, so it makes sense.
Is human language an evolution of animal communication?
There are a lot of big debates about that. Last week we had a great workshop in Berlin and there was a lot of talk about this particular topic. There is clear evidence that bonobos and chimpanzees also use this kind of process by which gestures become signs. It’s called ontogenetic ritualization. You take a part of a pragmatic, utilitarian action and it becomes a sign. And there are amazing video clips. There is a big debate about whether they do it or not, but to me it’s a clear indication that non-human primates were able to invent new signs, which then evolved. Most animal communication is innate, like the colour of feathers or alarm calls. Human language is clearly a cultural system; we have all these differences between languages. If we are looking for a comparison with animal communication, to me the very first stage of language, the gestural stage, is something we do see in animals. But then in the human case it took off. Still, the distance is not as great as you might think.
Did language come before thinking, thinking before language, or did both appear at the same time?
This is the chicken and the egg. There are a lot of linguists, Chomsky for example, who believe that thinking came first and then language is a way to externalize thinking. But I think this is not right, and that it is a coevolutionary process. If I want this bottle there is a thought, there is a desire, so there’s something going on in the brain. But then there is language, which helps to think, to form new thoughts. So language is a big motor for forcing the other person to adopt certain meanings. If I ask for the red bottle and you give me the blue one and I say “That’s not the one”, I’m encouraging you to make a distinction of colour between red and blue. And I say “encouraging” because there are cultures that don’t use colours, meaning hues. They might use brightness, or they have words which mean a more diffuse kind of thing, like ‘it’s alive’. So I think it’s a coevolution in the sense that language forces the other person to adopt and share categories, which are the building blocks of thoughts. But also, once you have syntax and grammar you can start formulating and communicating more complex thoughts. I think each one pushes the other up in complexity. I’m convinced that the reason why humans are so superior in terms of intelligence and capabilities is language, not that our vision system is better or that we can run faster, because animals are much better at that than we are. Language is the kind of key that triggers this rapid evolution, not just in language but also in thought.
So without language we would be much more stupid than we are.
Without language we would have cultures comparable to those of bonobos or chimpanzees. Well, that’s a suspicion, but not that far off, because in order to deal with conflict or to culturally transmit information you need language. There was an article a few months ago where they had people making stone tools. One group had to learn by seeing the object and trying to make something similar, another could watch how it was made but could not talk, and another could point, so there was some communication. The experiment showed that the know-how for doing this could only be transmitted from one group to another if there was a form of symbolic verbal communication. This is also what archaeologists and anthropologists refer to when they talk about a point in the evolution of humans when there was a jump in stone-tool technology. They associate it with the origins of language.
You say that a breakthrough in your research was finding out that to get new meanings you have to interact with the world and with other people, so you need a body, motivations and drive. Before this insight, what was your hypothesis? And how did arriving at this breakthrough change your direction?
I didn’t have much of a hypothesis, just a puzzle. Colour terms: where do they come from? If you just sit in a chair, why would you develop colour terms? Or if you are not talking to anyone or doing anything in the world? So meaning has to come out of interaction with the world and with others. It implies that there has to be some interaction.
In 1996 you founded the Sony Computer Science Laboratory in Paris. How did this opportunity appear, and what was your experience like during the first years as director of the lab?
The opportunity came because I was invited as a visiting researcher in Tokyo. At the end they asked me to stay there. I thought about it, but I decided to go back to Europe. So they suggested creating a new lab somewhere in Europe, a sister lab of the Sony Computer Science Lab in Tokyo. This lab is a small pocket of creative research. At that time it was extremely difficult to find money to do the kind of research that I wanted to do, so thanks to the Sony research lab I could, in relative peace and stability, do the fundamental research that produced these breakthroughs. It’s a small team; you don’t need a lot of money, but you do need some really good people.
You were involved in the creation of Aibo. What was the objective of developing a robotic pet?
Masahiro Fujita was an engineer with a dream of a dog-like robot. And I think they sold 250,000 copies of this robot. Sadly, the development stopped at some point because, even though it was a success, for a consumer electronics company success means millions, not 250,000. I saw a first prototype as early as 1995, and it still took another six or seven years before it was on the market. In the lab in Paris we worked on a motivational system and a learning system for the Aibo.
2014 was the last year Sony gave technical support to Aibo owners. Some owners (mostly in Japan) had to confront the fact that their robotic dog would “die” one day, and some even prepared funerals for them.
I saw one of those funerals.
What are the differences between Japanese culture and ours that make them develop a closer relationship to robots and technology?
It’s like a child with a doll. The child knows that the doll is not real, but she still wants to be with her. It’s projection; it’s a human thing to project emotions and feelings onto objects and to care about them. It’s not just a Japanese thing. In African cultures it might be a tree. It’s a tool that humans use to give meaning to their lives. It’s true that Aibo started to function like that, which is quite interesting. We find it odd, but we attach great importance to other things that those cultures would find strange.
I remember the first time I programmed an Aibo. After some months I managed to get the dog to follow a path, and the usual reaction of people was, “And that’s all? Only this?” Does society have too many expectations regarding research with robots?
That is absolutely true, and of course science fiction films do not help. It’s actually a big problem for getting funding. And even very smart people fall for it. I once had an argument with a mathematician from Princeton, a very brilliant woman, who said that the robot could see. I said it couldn’t. It had eyes thanks to a camera, the same as in your smartphone, but that’s it. The camera doesn’t see anything. It’s our brain that has to reconstruct the image of reality, and it works very hard; billions of neurons are very active to be able to do that. So yes, this is a big problem. But also, if understanding everything that happens in a cell is extremely difficult, imagine understanding how a whole brain works. And then try to replicate it! It’s impossible. So this is our message to the world.
“Don’t expect too much”
Don’t expect too much. But also, in astronomy they have shown us that the universe is full of extraordinary objects in the sky, like black holes. And we find the same sort of infinite complexity in the workings of intelligence. This discovery also gives us enormous admiration for what is going on in our brain, or even in the brain of an ant.
Through robotic models you are trying to approach the cultural evolution of language. What about the factors that escape such models, like the evolution of language in big populations or over long periods of time?
We try to simulate this at a small scale. I came up with the idea of “teleportation”. The state of the robot’s software, what we call an agent, is kept on a server. We download it into the robot, activity goes on, we upload the state back to the server, and so on. This is a very good way to work, because we can do experiments with populations of thousands of agents even if we have only ten robots. This way we can investigate very fascinating problems related to a population. For example, how is it possible that millions of people who speak Catalan but have never met each other can still talk to each other, without telepathy or some central organization that has coordinated all this? It’s one of the big puzzles.
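The teleportation cycle Steels describes (download agent state into a body, interact, upload it back) can be sketched as a small simulation. This is a minimal Python sketch under invented names and does not reflect the actual Sony lab software; the `interactions` counter merely stands in for whatever the agent learns while embodied.

```python
# Minimal sketch of the "teleportation" scheme: a large population
# of agent states lives on a server, while a small pool of robot
# bodies takes turns embodying them. All names are illustrative.
import random

class Agent:
    """The software state of one agent (e.g. its lexicon)."""
    def __init__(self, name):
        self.name = name
        self.interactions = 0  # stands in for learned state

class Robot:
    """A physical body that can temporarily host an agent."""
    def __init__(self):
        self.agent = None

    def download(self, agent):
        self.agent = agent            # server -> robot

    def interact(self):
        self.agent.interactions += 1  # a language game happens here

    def upload(self):
        agent, self.agent = self.agent, None
        return agent                  # robot -> server

def run(population, robots, rounds):
    for _ in range(rounds):
        for robot in robots:
            robot.download(random.choice(population))
            robot.interact()
            robot.upload()

population = [Agent(f"agent-{i}") for i in range(1000)]
robots = [Robot() for _ in range(10)]
run(population, robots, rounds=50)
# 1000 agents have now shared 10 bodies over 500 embodied interactions.
```

Because the agent object persists on the server between embodiments, whatever it learned in one body carries over to the next, which is what lets ten physical robots support population-level experiments with thousands of agents.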
In line with the above, it seems clear that building simple models can be a great tool for understanding these phenomena, but have you received any criticism for being reductionist?
Yes, but there are two things to say here. We are in fact much less reductionist than most people; it’s another form of reduction, because most scientists stick to one phenomenon, like colour vision, they go very deep, and everything else is gone. We do something else. We also reduce the complexity, but we take a whole-systems approach. The system has vision, language, conceptualization, motivation, learning… All of these parts work together, and then we try to make it more complicated. And this is often not understood by people who are not used to this methodology. I think we cannot isolate the phenomenon. As in biology, everything is connected to everything else, so this kind of whole-systems approach is an alternative to reductionism.
Everything is connected. And we’ve mentioned that in your research a multidisciplinary focus seems absolutely compulsory: robotics, language, biology, neuroscience… Do you think there is some impenetrability between fields, or do people tend to be collaborative?
There are a few people crossing boundaries, but most scientists don’t want to hear about this. The linguists reject the proposal because they say we are not publishing enough in linguistics journals, psychologists will say that they don’t consider this an experiment, they don’t see the methodology… they will all complain. And most of the time they don’t even complain, they just ignore this kind of research. Maybe they are fascinated and think it is interesting, but they don’t integrate it into their own thinking. It’s often difficult, but there are enough people who like to move between fields.
Do you think that in our pragmatic society applied research is killing fundamental research?
This is a really big problem. There’s too much emphasis on application, there is a real crisis in fundamental science.
Is it the natural course of research, or can we change this trend?
We have to change it somehow. It has to do with the capitalist economy: research has to be useful, and if you are doing something which is not, you should not get funding. The funding of science is seen as an investment which should produce money. But in order to get apples you first need a small tree, and it has to grow. Or olives. How long does an olive tree take before it gives you olives? But they want the olives now. Also, for me science is a cultural activity, like the arts. It gives us insights into the world, into who we are. We have to change this system, but I don’t know how.
And what about the humanities? Will we end up telling our children that “there was a time when we used to study things like literature and philosophy”, and will they look at us like madmen?
I hope not, but there is a big risk. And the result is a very surface-oriented culture. With people reading just Twitter messages instead of books, they won’t be able to concentrate for long enough on a particular subject to formulate ideas and discuss them. It is a really big problem. And information technology has not been helpful.
What is this opera you wrote about a humanoid robot?
This was with Óscar Vilarroya. The opera is a pastiche, a kind of Baroque opera, and I wrote the music for it. It’s about what happens when you get a humanoid in your house, what this humanoid can do and what happens if the singularity takes place. It makes fun of AI; it’s a tragicomic opera. It was a fantastic project.
Please, can you tell us about Dr. Buttock’s Players Pool?
Oh, you found that! How did you find it?
We dig very deep!
Indeed you do! I’ve always been working on the edge between art and science, and this was in Antwerp, which is a city smaller than Barcelona but with kind of the same flavor: a very artistic feeling. So it was very natural to do artistic activities there. This was a collective of artists that I brought together to do performances, and at that time performance was a new medium. We did our performances in museums, in galleries, on the street… it was fantastic.
Did you ever get arrested?
I didn’t, but the police stopped one of our performances.
Do you think science popularization is a must in a scientist’s career, or only an option?
It’s an option, because not everybody is good enough at it, or even interested. People who can do it should. I have always done it, and now I am spending more and more time doing this kind of thing, mostly in the form of lectures. Or that opera. Actually, I also wrote a theater play about a mathematician, Sofia Kovalevskaya. You see? You can still dig and find things. [Laughs] This was for a theater in Avignon. This is maybe a way (through opera, theater, movies…) that we can still make contact with an audience that is no longer reading.
What about science fiction as a tool for spreading science?
I am less sure about science fiction. The author of science fiction almost by necessity has to exaggerate and make a story about a dangerous robot. I don’t read science fiction or go to see these films, partly because I have very little time and I would rather read a scientific book. There is the danger of assuming that all of this is going to happen. If people are afraid of AI now, is it because of the real state of the art in AI, or because they saw one of these movies?
What about the TV show Real Humans? It bears a close resemblance to your humanoid robot opera.
I actually like it. It was done with a lot of imagination and intelligence. At least the robots were played by real humans. I like it as entertainment, but they did raise interesting questions. A lot of the movies are so outrageous in terms of technology that to me they are very unrealistic, at least in the short term.
Could you recommend some books to somebody who is starting to be interested in science?
Here are a few recommendations: Vidas sintéticas by Ricard Solé (2012, Metatemas), La disolución de la mente by Óscar Vilarroya (2002, Metatemas), Intuition pumps and other tools for thinking by Daniel Dennett (2013, Norton), Arrival of the fittest by Andreas Wagner (2014, Oneworld) and Ten Billion by Stephen Emmott (2013, Random House).
Photography: Jorge Quiñoa