Let's work with some well-known text:
http://zh.wikipedia.org/wiki/%E5%86%8D%E5%88%A5%E5%BA%B7%E6%A9%8B
You'll get something like this:
http://i.imgur.com/goAlb.jpg
Tagxedo does a little Chinese character analysis to figure out
likely "words". However, this is imprecise because, unlike languages
written in the Latin alphabet, Chinese doesn't put delimiters between
words; readers work out the word boundaries from context. I think
Tagxedo already does a reasonable job, but of course the problem is
hard.
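Tagxedo's actual heuristics aren't public, but a common technique for guessing Chinese word boundaries is dictionary-based greedy longest-match segmentation. Here's a minimal sketch of that idea; the mini-dictionary is purely illustrative, not Tagxedo's word list:

```python
def segment(text, dictionary, max_word_len=4):
    """Greedily match the longest dictionary entry at each position,
    falling back to a single character when nothing matches."""
    words = []
    i = 0
    while i < len(text):
        match = text[i]  # fallback: treat the character as its own word
        for length in range(min(max_word_len, len(text) - i), 1, -1):
            candidate = text[i:i + length]
            if candidate in dictionary:
                match = candidate
                break
        words.append(match)
        i += len(match)
    return words

# Hypothetical mini-dictionary covering the poem's title 再別康橋
dictionary = {"再別", "康橋"}
print(segment("再別康橋", dictionary))  # ['再別', '康橋']
```

The catch is exactly the one described above: when a run of characters allows more than one valid split, a dictionary lookup alone can't tell which reading the author intended.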
Alternatively, if you have already segmented the words yourself (using
white space as the delimiter), you can set "Apply Non-Latin
Heuristics" to "No". Tagxedo will then interpret the text *as if* the
Chinese characters were English characters, so a consecutive run of
Chinese characters - even an entire sentence - will be treated as one
word. That's fine if you know what you're doing, but obviously it
cannot be the default.
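The effect of turning the heuristics off can be illustrated in plain Python (this mirrors the whitespace-only behavior described above; it is an illustration, not Tagxedo's code):

```python
# Pre-segmented input: spaces mark the word boundaries you chose yourself.
pre_segmented = "再別 康橋 徐志摩"
print(pre_segmented.split())  # ['再別', '康橋', '徐志摩'] — three words

# Unsegmented input: with no spaces, the whole line is a single "word".
unsegmented = "輕輕的我走了正如我輕輕的來"
print(unsegmented.split())    # ['輕輕的我走了正如我輕輕的來']
```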