I’ve been trying to pay attention to that since late 2016, when Google Translate got sort of good enough to use in some practical contexts. Within a year or two, many Japanese people I met who worked in multilingual or international contexts said that they used machine translation in their jobs. The most common use was to translate incoming emails and other documents from English into Japanese to get their gist. People used MT for outgoing documents as well, though the results at the time were probably not very good. The quality of LLM-based MT in both directions is much better now, of course, especially if people know how to use it well.
As I was involved in research on English education policy then, I was also interested in whether people could use interpreting apps to converse across language barriers. Around 2017, I did some experiments with pairs of graduate students from Japan, Poland, China, and one or two other countries, in which they would try to have a conversation, each speaking a language the other didn’t understand, using the speech-interpreting function of the Google Translate app. The conversations inevitably broke down after three or four turns, with one or both people unable to understand what the MT output was supposed to mean.
When I took a taxi in those days, I would ask the driver whether foreign passengers tried to communicate with them with language apps and, if so, how well it worked. I got mixed responses. Some drivers said that the apps were useful, while others said they often had trouble understanding what the app was saying. One problem seemed to be that most of the customers were Chinese tourists: they would say their destination using the Chinese readings of the kanji, and the app would be unable to convert those to the Japanese readings.
I haven’t tested speech interpretation apps lately, but they should be much better than in 2017. Last year, my two sisters and their husbands—all in their seventies—spent three weeks in Japan. None of them had ever studied Japanese. I spent about a third of that time with them, but for the rest of their stay they were on their own. They had very few language problems. Once, I went with them to a drugstore to buy cold medicine. I was expecting to have to read the labels on the medicine, but they just pointed their cameras at the packages and were able to find what they wanted without my help. One day, they went into a café in Osaka and had a long, informative conversation with the elderly proprietor using the Google Translate app.
When I talk to Japanese people about AI, one of the most common ways they report using it is for translation. Just a couple of days ago, I led a discussion with some teachers at a university about AI, and one mentioned how some students in her English classes, instead of trying to read textbook passages themselves, take pictures of the texts and have ChatGPT or similar tools translate them into Japanese.
I did something similar myself today. I noticed a sign on a railway platform in Japanese, English, Korean, and Chinese. The English was fine, but I was curious how good the Korean and Chinese might be. So I took a picture of the sign, gave just the Korean and Chinese parts to Gemini, and asked it how good the language was. I don’t know any Korean, and I don’t know Chinese well enough to evaluate its naturalness, so I can’t assess Gemini’s assessment. But at least the difference between its ratings of the two languages is interesting:
Tom Gally