The answer depends on how one interprets the intent of the Turing Test.
The Turing Test (originally called the Imitation Game) was proposed on the assumption that, given the vast number of possible input-output pairs in human conversation, nobody could specifically predict what the conversational inputs in the test would be, nor pre-program a machine with a corpus of examples large enough to be likely to cover whatever is input during the test. Therefore, if a machine can imitate conversation, it must have a function akin to intelligence, i.e., a function that generates a large number of appropriate outputs from a small amount of input data.
However, subsequent developments in information processing have largely defeated the implicit assumption of the Turing Test: a large language model's training corpus may now contain enough examples to pass the test solely on the basis of statistical pattern matching, without any function analogous to intelligence.
Thus, if the intent of the Turing Test, as applied to translation, is understood as demonstrating the ability to produce a translation from a small amount of input, from which the translation output cannot be predicted by, for example, statistical means, then GPT has not passed the Turing Test for translation.
If, on the other hand, the intent of the Turing Test for translation is understood as the ability to output a translation that is sometimes or often deemed valid, or taken by some judge to be a human translation, regardless of how the translation was generated, then GPT probably has passed it.
Herman Kahn