(moving conversation over from
https://github.com/transifex/transifex/issues/65 at mpessas' request)
I've developed a system for providing screenshots to translators to
illustrate the context in which a message is used in the program, and
I'd like to help integrate it into Transifex if there's sufficient
interest in such a feature. The way it would work is that the
developer uploads some screenshots in addition to the message template
file, and they are automatically matched to messages, so that when a
translator is translating a message, the matching screenshot is shown.
The system works by OCR-ing the screenshots and then matching each
message to the best-matching screenshot. If you're interested in more
details, I've published a paper on it at CHI 2012
(http://groups.csail.mit.edu/uid/other-pubs/chi2012-screenshots-for-translation-context.pdf;
the "Message-Screenshot Matcher" subsection describes the algorithm).
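To give a rough idea of the matching step, here's a minimal sketch in Python (not the actual algorithm from the paper, which is more sophisticated; the function and data names here are just for illustration): score each screenshot by the fraction of the message's words that appear in its OCR'd text, and pick the highest-scoring one.

```python
def tokenize(text):
    """Split text into a set of lowercase word tokens."""
    return set(text.lower().split())

def best_screenshot(message, screenshots_ocr):
    """Return the screenshot whose OCR'd text best covers the message.

    screenshots_ocr maps screenshot filename -> OCR'd text extracted
    from that screenshot.
    """
    msg_tokens = tokenize(message)

    def score(ocr_text):
        if not msg_tokens:
            return 0.0
        # Fraction of the message's tokens found in the screenshot's OCR text.
        return len(msg_tokens & tokenize(ocr_text)) / len(msg_tokens)

    return max(screenshots_ocr, key=lambda name: score(screenshots_ocr[name]))

# Example: the message "Sign in" matches the login screenshot.
shots = {
    "login.png": "Username Password Sign in",
    "settings.png": "Preferences Language Theme",
}
print(best_screenshot("Sign in", shots))  # login.png
```

In practice OCR output is noisy, so the real matcher has to be more tolerant (fuzzy token matching, layout information, etc.), but the core idea is this kind of text-overlap scoring.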
In terms of code, the prototype implementation I used for the paper is
MIT-licensed, written in Java, and available at
https://github.com/gkovacs/textmatch . I realize Java isn't very
popular in the FOSS world, so I can reimplement the algorithm in
another language (Python, C, whatever's most convenient) if needed.
Mainly, I wanted to find out whether there's sufficient interest in
integrating such a feature into Transifex (I wouldn't want to work on
this for several months only to have the work rejected due to
integration issues), and if so, I'd appreciate specific steps I should
take to do that.
Thanks,
Geza