After viewing the video of Tom’s Zoom session on AI (thanks very much, Tom!), I have done a few trials of translation using GPT-4.
One of them was comparing GPT-4’s translation of a 日経 article on Kishi’s plan for increasing the supply of hydrogen as a non-carbon-based fuel with DeepL’s. This turned up a couple of things that I found interesting. This is only a glimpse into the subject, of course.
(1) The article referred to 脱炭素燃料, which both translations called “decarbonized fuel.” I certainly wouldn’t call hydrogen “decarbonized,” and neither would any other competent human translator, since hydrogen obviously never contains carbon in the first place. This is an example of these translating machines not knowing anything about the real world, I think.
(2) More seriously, DeepL fouled up a mention of a year, but GPT-4 got it right.
The article says:
骨子では30年ごろの商用での実用化を目指し、サプライチェーンの構築を政府が後押しする方針も記した。
GPT-4 says: “The outline sets a policy to aim for commercial practicality around 2030 and have the government support supply chain construction,” which I think is OK. But DeepL has this: “The framework also states that the government will support the establishment of a supply chain with the aim of commercializing hydrogen around 1930.”
Come again??!! Why did it jump back a century?
What happened here, perhaps, is that 日経 created the ambiguity in the first place by writing “30年ごろ” where “2030年ごろ” would have been unambiguous.
Back at the beginning of the article, it mentioned the year “2040年”: “2040年に現状の6倍の1200万トン程度に増やす方向で調整する。” (“Adjustments are being made toward increasing the supply to about 12 million tons in 2040, six times the current level.”) And again, it’s in the 21st century: “…2017年に決めた「水素基本戦略」を改定する意向を表明した。同日の会議で骨子を示した。” (“…expressed the intention to revise the ‘Basic Hydrogen Strategy’ decided on in 2017, and presented an outline at a meeting the same day.”) So the century we’re talking about is clearly established.
Then we see: “水素の現状の供給量は年間約200万トン。30年に300万トン、50年に2000万トンを目指す方針を掲げてきた。” (“The current supply of hydrogen is about 2 million tons per year. The stated policy has been to aim for 3 million tons in [20]30 and 20 million tons in [20]50.”) “年間” (“per year”) is OK, but then it gets sloppy with the year numbers: “30年に300万トン、50年に2000万トン…”
Apparently, GPT-4 grasped that the century under discussion is the 21st, but DeepL slipped back to the 20th, perhaps confused by the intervening abbreviated years “30年” and “50年.”
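As an aside, the century-resolution step that DeepL fumbled can be sketched in a few lines of Python. This is only an illustration of the disambiguation a careful reader performs, not a claim about how either system actually works; the function name and the fallback century are my own assumptions.

```python
import re

def expand_abbreviated_years(text):
    """Expand two-digit Japanese year abbreviations (e.g. 30年) to four
    digits, taking the century from any four-digit year already present
    in the text, the way a human reader would."""
    full_years = [int(y) for y in re.findall(r"(\d{4})年", text)]
    # Assumed fallback: default to the 2000s if no four-digit year appears.
    century = (full_years[0] // 100) * 100 if full_years else 2000

    def fix(match):
        return f"{century + int(match.group(1))}年"

    # Match a two-digit year not preceded by another digit, so that
    # four-digit years like 2040年 are left untouched.
    return re.sub(r"(?<!\d)(\d{2})年", fix, text)
```

Applied to the 日経 sentences above, “30年” and “50年” expand to 2030 and 2050, because the four-digit “2040年” earlier in the text fixes the century.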
This looks to me like an example of a point people make about the two apps: DeepL tends to translate sentence by sentence, whereas GPT-4 is able to put each sentence into the context of the whole. In any case, though, a human being needs to be paying attention in order to decide which machine translation is right. You can’t blindly trust any machine.
Jon J.
Dylan,
Thank you for your post on Honyaku with the additional clarification.
Although no one responded to what I wrote, I think my suggestion of “carbon-free fuel,” based on experience and intuition, was a reasonable guess, even though the fuel itself is synthesized from CO2 and H2, which makes “carbon-free” illogical in Jon’s short example. After reading your post, however, I now know the greater context, and I should have suggested something like “recycled carbon-containing fuel” prepared by the synthesis of CO2 and H2.
Hopefully, human translators will still have jobs a few years from now, but they will have to bring to the table the kind of research skill you have demonstrated, a good sense of the writer's intention (because not everyone is a good writer), and the ability to write clearly in the target language, in addition to the ability to utilize something like GPT and similar AI products as the need arises.
More power to you,
John Stroman