Hi Lukasz,
I am working on my bachelor's thesis; the topic is training an MT model using the Transformer and tensor2tensor.
The basic idea is pretty simple:
- You send the source text to the MT model.
- You get an initial version of the target text back.
- In many cases post-editing is required (the MT output is not perfect, or the human translator has something different in mind and is not 100% happy with it), so the translator will start to correct the output.
- Let's further assume he works from left to right and encounters the first word he doesn't like or wants to change.
- He changes the word.
- Next, the idea is to send the corrections made by the human translator, together with the source text, back to the MT model.
- The MT model then generates, hopefully, a better next version of the translation, taking the corrections and the source text into account.
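To make the intended workflow concrete, here is a rough sketch of the loop in Python. `mt_translate` is only a mock stand-in for the trained model (the real system would force-decode the corrected prefix and continue translating from there), and the function names are mine, not part of tensor2tensor:

```python
def mt_translate(source, prefix=""):
    # Mock stand-in for the trained model. The real model would
    # force-decode `prefix` and then keep translating `source`.
    if prefix.startswith("In den"):
        return ("In den Wochen 8 und 9 können sie ihre eigene "
                "Karnevalsmaske malen und ihre eigenen Pflanzen züchten.")
    return ("Sie können ihre eigene Karnevalsmaske malen und ihre "
            "eigenen Pflanzen züchten, Woche 8 und 9.")

def interactive_post_edit(source, get_correction, accept):
    # Translate, show the hypothesis to the translator, take the
    # corrected prefix, and re-translate with that prefix forced,
    # until the translator accepts the output.
    prefix = ""
    while True:
        hypothesis = mt_translate(source, prefix)
        if accept(hypothesis):
            return hypothesis
        prefix = get_correction(hypothesis)

source = ("They can paint their own carnival mask and grow their own "
          "plants, Week 8 and 9.")
final = interactive_post_edit(
    source,
    get_correction=lambda hyp: "In den",        # the translator's edit
    accept=lambda hyp: hyp.startswith("In den"),
)
print(final)
```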
Now the question is: why should the next version of the output be better?
- My assumption is that if I can train the model to (re)generate the corrections made by the human translator, then the words the model generates after those corrections should be closer to the version the translator has in mind. In a way, the corrections give the model a hint about the "direction" in which it should translate.
- If I am not mistaken and did not fully misunderstand the model, the output layer generates the next word by computing a conditional probability: the probability of word y_{n+1} given the source text and the words y_1, ..., y_n.
- Now, if the model learns to re-generate the words y_1, ..., y_n (the corrections made by the human translator), shouldn't the probability be higher that y_{n+1} is close to the version the translator has in mind?
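To illustrate that last point, here is a toy sketch in plain Python (no real model): `next_word_distribution` stands in for the transformer's output softmax over p(y_{n+1} | source, y_1, ..., y_n), with made-up logits, just to show how forcing the corrected prefix shifts the next-word distribution:

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Tiny hypothetical vocabulary; a real model scores the full vocabulary.
VOCAB = ["Sie", "In", "Wochen", "können", "den"]

def next_word_distribution(prefix):
    # Toy stand-in for the output layer: the logits depend on the forced
    # prefix y_1..y_n, mimicking p(y_{n+1} | source, y_1..y_n).
    if prefix and prefix[-1] == "den":
        logits = [0.1, 0.1, 3.0, 0.5, 0.1]   # after "In den": "Wochen" likely
    else:
        logits = [2.0, 1.0, 0.1, 0.5, 0.1]   # no correction: "Sie" likely
    return dict(zip(VOCAB, softmax(logits)))

# Without corrections the model continues one way; forcing the human
# prefix "In den" shifts the distribution for the next word.
d0 = next_word_distribution([])
d1 = next_word_distribution(["In", "den"])
print(max(d0, key=d0.get))   # -> Sie
print(max(d1, key=d1.get))   # -> Wochen
```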
I just wanted to know whether this assumption makes sense to you from a theoretical point of view, or whether you could point me to some papers that deal with this topic.
I have already achieved some results by training an English-German model. It performs pretty well given the data I used and the time I spent on training.
Here is an example from a real localization project where MT was used and post-editing was required:
Source:
"They can paint their own carnival mask and grow their own plants, Week 8 and 9."
MT-Output (from one of the big MT providers):
"Sie können ihre eigene Karnevalsmaske malen und ihre eigenen Pflanzen züchten, Woche 8 und 9."
Reference translation (after human post-edit):
"In den Wochen 8 und 9 können sie ihre eigene Karnevalsmaske malen und ihre eigenen Pflanzen züchten."
Now take the same source text and use the model I trained.
Initial version of the MT output (similar to the example above):
Sie können ihre eigene Karneval-Maske malen und ihre eigenen Pflanzen züchten, Woche 8 und 9.
Now make the following correction by adding "In den" at the start of the initial MT output, or by replacing its first two words, and send the correction together with the source text to the MT model by pressing "Ctrl+Space":
In den Sie können ihre eigene Karneval-Maske malen und ihre eigenen Pflanzen züchten, Woche 8 und 9.
or
In den ihre eigene Karneval-Maske malen und ihre eigenen Pflanzen züchten, Woche 8 und 9.
Next version of the MT output after making the correction "In den" (almost identical to the reference translation):
In den Wochen 8 und 9 können sie ihre eigene Karneval-Maske malen und ihre eigenen Pflanzen züchten.
It would be great to get some feedback from you; it would help me a lot.
Thanks a lot,
Arben