Ray Jackendoff's wonderful book Foundations of Language[1], for
example, talks about phonological, syntactic and semantic/conceptual
structures, and further divides phonological structure into segmental,
syllabic, prosodic and morphophonological structures.
As I suggested at the workshop, and as Jackendoff himself admits,
there's far more agreement amongst linguists about the lower levels
than about the higher ones.
I can conceive of how HTMs might model the phonological structures
quite easily, the syntactic structures not so much and the semantic/
conceptual structures hardly at all :-)
For this reason, I think the best starting point in trying to apply
HTM to linguistics is a series of experiments on structures within the
areas of phonetics, phonology and morphophonology.
One can imagine tasks such as close phonetic transcription from an
audio signal (with speaker invariance), abstract phonological
segmentation from a close phonetic transcription (with accent
invariance), morphological tagging (with inflectional class
invariance) and part-of-speech tagging.
All of these could be attempted as individual projects, but ultimately
the output of one level could be the input to the next.
Personally, I think feedback is going to be crucial for all of this.
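To make the layered arrangement a bit more concrete, here's a minimal
sketch in plain Python (not NuPIC or any real HTM API; the Level class,
run_pipeline function and the level names are all made up purely for
illustration) of levels whose output feeds the level above, with a
crude feedback pass back down:

    from typing import Callable, List, Optional

    class Level:
        """One hypothetical processing level (e.g. phonetic -> phonological)."""

        def __init__(self, name: str, encode: Callable[[str, Optional[str]], str]):
            self.name = name
            self.encode = encode                 # forward transformation for this level
            self.feedback: Optional[str] = None  # top-down bias from the level above

        def process(self, representation: str) -> str:
            # The feedback argument stands in for top-down prediction/bias.
            return self.encode(representation, self.feedback)

    def run_pipeline(levels: List[Level], signal: str) -> str:
        """Feed each level's output into the next, then push feedback back down."""
        outputs = []
        rep = signal
        for level in levels:
            rep = level.process(rep)
            outputs.append(rep)
        # Crude feedback pass: each level receives the output of the level above it.
        for lower, upper_output in zip(levels[:-1], outputs[1:]):
            lower.feedback = upper_output
        return rep

    # Toy stand-ins for the real tasks; each just wraps its input with a label.
    levels = [
        Level("phonetic",      lambda x, fb: "phonetic(%s)" % x),
        Level("phonological",  lambda x, fb: "phonological(%s)" % x),
        Level("morphological", lambda x, fb: "morphological(%s)" % x),
        Level("pos",           lambda x, fb: "pos(%s)" % x),
    ]

    print(run_pipeline(levels, "audio-signal"))

In a real HTM setup each encode step would be a learned region rather
than a lambda, and the feedback would be a prediction that biases the
lower level on the next pass, but the wiring would look roughly like
this.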
Of course, a ton of questions arise even just from this set of tasks.
One big question I have, which starts to come to the fore as you go up
levels, is the role of the lexicon.
James
[1] If I had to recommend one book on linguistics for people
interested in the application of HTM to language, it would be this one.