How thinking about translation can help you think about bots

by Esther Seyffarth

This is a writeup of the talk I gave at BotSummit 2016 in London. You can view my slides here; a video of the talk should be online soon.

I'm a linguist and botmaker whose first language is German. I wish there were more German twitterbots; it feels so nice when bot tweets in your own language pop up in your timeline now and then. I do my best to make German bots and to encourage German speakers to make their own, to contribute to the bot landscape out there and to deal with some of the quirks and Stolpersteine (stumbling blocks) of this language in creative ways.

Coming from this perspective, I was surprised when a big German paper recently published an article about two chat bots talking to each other in German. The bots were Rose and Mitsuku, both of them internationally famous prize-winners. I was a bit confused - had the teams who worked on those bots taught them more than one language? Or were German-speaking bots winning international prizes? When I started to read the article, I realized what had happened: the authors had talked to the bots in English and then translated the results into German.

But this confused me even more. The introduction had asked what happens when you make bots talk to each other - but how could the readers of the story know what had happened when they couldn't read the original results of the experiment?

This might not seem like such a big deal at first. Most things the bots said could be translated to mean approximately the same thing in a different language. But there were also phrases that just didn't make sense: on the last page of the article, one of the bots made a statement that, translated back to English, means "Bush shoplifted the election". Some commenters reacted to this and said that the bots weren't really that good, because they just said random stuff like this. But what had actually gone wrong was the translation: the bot had used an idiom, and instead of choosing a German phrase that conveys the same meaning, the translator had translated the words individually.

Near the start of the conversation between the bots, one of them mentioned the word "stories". The other bot replied by talking about an old woman who lived in a shoe. Again, German-speaking commenters were unhappy, and again, the translation was the problem: instead of substituting a nursery rhyme that German readers might have heard before, the translator had rendered the utterance literally.

This made me think. I talked about it with a friend who is studying to be a translator, but we couldn't come up with a way the article could have been presented that would avoid those misunderstandings. The best my friend could suggest was adding translator's notes - one at the top of the article, and some next to particularly difficult phrases - to explain the choices made during translation.

But why did these translation difficulties arise in the first place? The people who developed the bots had apparently decided to make them seem more human by teaching them idioms like the ones I've mentioned. This trick probably works really well - but only on humans who interact with the bot directly, understand the references, and aren't reading a translated version of the output.
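To make that mechanism concrete, here is a toy sketch of keyword-triggered idiom responses. This is a hypothetical illustration of the general idea, not how Rose or Mitsuku actually work - both are built on much richer pattern-matching systems - and all the names in it are made up:

```python
# Toy sketch: a canned idiomatic response triggered by a keyword.
# Hypothetical illustration, NOT the actual Rose/Mitsuku implementation.

IDIOM_RESPONSES = {
    # This reference only lands for readers who grew up with the rhyme;
    # translated word by word, it just sounds like random nonsense.
    "stories": "Do you know the old woman who lived in a shoe?",
}

def reply(utterance: str) -> str:
    """Return a canned idiomatic response if a trigger word appears."""
    lowered = utterance.lower()
    for keyword, response in IDIOM_RESPONSES.items():
        if keyword in lowered:
            return response
    return "Tell me more."

print(reply("I like stories."))  # -> the nursery-rhyme reference
```

A literal translation of that canned response keeps the words but loses the reference - which is exactly what the German commenters noticed.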

Is it even possible to train a bot with references and a vocabulary that everyone understands and agrees with?

Wittgenstein said, "The limits of my language mean the limits of my world". I'd like to flip the perspective and talk about how the limits of my world mean the limits of my language: my experiences and cultural background shape the language I use - not only its grammar and lexicon, but also the references and idioms I reach for. I'm interested in how this applies to the process of building bots.

Even among speakers of a common language, there are a lot of differences in how they speak or write. As an example, consider these German words for "refugee":

- Refugee
- Asylsuchende
- Flüchtling
- Asylant

Depending on your beliefs and political stance, you will choose a different word to talk about this group of people. The choice of words here is at least as difficult to translate as the nursery rhymes or multi-word expressions used by the chat bots - probably even more difficult. And there are other situations where the choice of words is influenced by political or social factors, such as sexist language or the names used for disputed territories.

Anytime that I, a human being with thoughts and emotions, use language, I convey a bit of those thoughts and feelings just by choosing which words to use to express a concept. It's impossible to talk about refugees in German without showing whether you feel neutral (Asylsuchende, Flüchtling), very positive (Refugee) or very negative (Asylant) about the topic. Most of the time, the choice is subconscious, because you tend to surround yourself with people similar to you and to pick up your community's linguistic habits. Sometimes the choice is intentional - when you want to manipulate your audience.

A bot isn't able to use that same sort of intuition to choose the lexical item that is most appropriate in a given context. If you want your bot to tell a story or write a poem, you need to be aware of how much hidden meaning is conveyed by the language it uses. Teaching your bot about pragmatics is extremely difficult, but I think it's worth it if you want to convey very specific feelings and opinions.
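As a rough illustration, here is what a first step toward stance-aware word choice could look like - a minimal sketch assuming a hand-annotated lexicon built around the refugee example above. The function names and stance labels are my own invention:

```python
import random

# Hand-annotated lexicon: each concept maps a stance label to words that
# convey it. The annotations follow the refugee example above; the
# "negative" entry is included only to make the contrast visible.
LEXICON = {
    "refugee": {
        "positive": ["Refugee"],
        "neutral": ["Asylsuchende", "Flüchtling"],
        "negative": ["Asylant"],
    },
}

def choose_word(concept: str, stance: str = "neutral") -> str:
    """Pick a word that expresses the concept with the intended stance."""
    return random.choice(LEXICON[concept][stance])

print(choose_word("refugee"))              # e.g. "Flüchtling"
print(choose_word("refugee", "positive"))  # "Refugee"
```

Even this tiny example shows the catch: somebody has to decide on the stance labels, and that somebody's worldview ends up baked into the bot.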

Mainstream media like to talk about the Turing test, as if the single most important question in AI research were "does it sound human enough?" But if you actually want your computer to sound like a human, you have to think about which type of human you'd like it to imitate, and you need to train it on data that is consistent with that choice. Martin O'Leary talks about this in his notes on bot ethics, in the section titled "You are what you eat".

Actually, who decides what sounds "human"? I've never seen an English-language bot use AAVE slang words. I'd love to see one, though! I believe that would make it sound more human to me. The most successful chat bots, the ones that win prizes, have personalities and linguistic habits that align very closely with western culture - or more specifically, with white western tech culture. It's not hard to understand why, when so much AI research is being done in western tech communities. But the effect is that bots become just another tool for asserting the "normalcy" of that culture and for supporting the assumption that the best way for a bot to sound human is for it to sound American.

There are exceptions, of course. Some bots are given personalities as aliens, like Izar (2nd place in the 2014 Loebner Prize competition), or as robots, like Linguo (4th place in the 2012 Loebner Prize competition). One notable case of a non-American, non-alien bot personality was Eugene Goostman in 2008. Eugene was presented as a 13-year-old Ukrainian boy and fooled one of the judges of the contest - but I'm not sure what that says about the bot, or about the contest. From media coverage at the time, it seemed clear that the choice was made because children are more easily forgiven for not being able to answer questions, and foreign children are more easily forgiven for grammatical errors. I'm not too fond of this decision, but I also don't like how reporters claimed the developers were "cheating" because their bot was a 13-year-old Ukrainian (instead of what - a twenty-something computer hacker living in San Francisco?). I'd be interested to hear other people's opinions on Eugene.

I'm not aware of any chat bot competitions like the Loebner Prize that focus on a language other than English. But why not? Why should everything anyone builds perpetuate the same things that have already been done so many times?

So let's change this! Most bots I know are English-language bots, but I know of a couple of French, Spanish, Irish and Finnish ones; I keep a list of all the German ones I hear about, and a friend of mine recently had their students build Dutch bots for a class project. If you're fluent in other languages, think about building bots that speak those languages! The tech community is still far from diverse, but lots of people are working to improve that. Let's support those efforts and make the things we create more diverse, too.

If you'd like to talk to me about bot diversity, you can contact me via Twitter (@ojahnn) or send me an email (admin@enigmabrot.de).