The methodology comes from Tarski's semantic theory of truth.
The approach has two notable theoretical strengths. (A T-sentence, by the
way, is a theorem of the form 'S is true <=> p'.) The two strengths are:
1. A theory of machine translation for free.
Suppose you have English and French and you give the semantics of both in
a common metalanguage. Given an English sentence S and a French sentence
S*, you can prove they are semantically equivalent by proving
|- S is true <=> p
|- S* is true <=> p
hence
|- S is true <=> S* is true
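To make the idea concrete, here is a toy sketch of my own (nothing like a full Tarskian axiom base): each object-language sentence is assigned its truth conditions in a common metalanguage, here modelled as Python tuples, and two sentences count as translations when the derived propositions coincide. The lexicons and the sentence pair are invented for illustration.

```python
# Hypothetical mini-theories mapping each object language into a
# shared metalanguage of logical forms (tuples standing in for p).
ENGLISH = {"snow is white": ("white", "snow")}
FRENCH = {"la neige est blanche": ("white", "snow")}

def truth_conditions(sentence, theory):
    """Return the metalanguage proposition p such that
    |- sentence is true <=> p, according to the given theory."""
    return theory[sentence]

def equivalent(s_en, s_fr):
    """S and S* are translations iff the theories prove
    |- S is true <=> p and |- S* is true <=> p for the same p."""
    return truth_conditions(s_en, ENGLISH) == truth_conditions(s_fr, FRENCH)

print(equivalent("snow is white", "la neige est blanche"))  # True
```

In a real system the equality check would be replaced by a proof of equivalence in the metalanguage; identity of logical forms is the degenerate case.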
2. Machine Learning for free.
You can ask questions like 'Given this unknown word, to what lexical category must
it belong for the context in which it occurs to be a sentence?' This requires a light
reformulation of what I presented.
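The abductive move above can be sketched as follows. This is a toy of my own devising, with an invented two-rule grammar and lexicon: treat the unknown word as having each candidate category in turn, and keep the categories under which the whole string parses as a sentence (S).

```python
# Invented illustrative grammar (binary rules) and lexicon.
GRAMMAR = {("Det", "N"): "NP", ("V", "NP"): "VP", ("NP", "VP"): "S"}
LEXICON = {"the": {"Det"}, "dog": {"N"}, "chased": {"V"}}
CATEGORIES = {"Det", "N", "V"}

def derivable(words, unknown, guess):
    """Set of categories derivable for the word sequence, treating
    the unknown word as having the guessed category."""
    def lex(w):
        return {guess} if w == unknown else LEXICON.get(w, set())

    def span(seq):
        if len(seq) == 1:
            return lex(seq[0])
        result = set()
        for i in range(1, len(seq)):          # try every split point
            for left in span(seq[:i]):
                for right in span(seq[i:]):
                    if (left, right) in GRAMMAR:
                        result.add(GRAMMAR[(left, right)])
        return result

    return span(tuple(words))

def abduce_category(words, unknown):
    """Which categories make the whole string parse as S?"""
    return {c for c in CATEGORIES if "S" in derivable(words, unknown, c)}

print(abduce_category(["the", "dog", "chased", "the", "wug"], "wug"))
# {'N'}
```

The same machinery, run in reverse over a proof-theoretic formulation, is what the 'reformulation' amounts to: the category of the unknown word is whatever makes the derivation of sentencehood go through.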
It's theoretically elegant, but a full-scale model for English would require an
axiom base of around 250,000 axioms! Nobody has ever tried to build a first-order
theory of that size.
M.