Hi, and thanks for the very good problem description :)
It seems that you have a typo(?) in the vocab setting of the nn-ensemble project: it is "vocab-pl", whereas in the base projects the vocab is "vocab-pl-lem". The vocabularies should be the same in the nn-ensemble and its base projects. I get a similar (but not exactly the same) error message about incompatible shapes as you do when the vocabularies differ.
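For illustration, a projects.cfg excerpt showing the vocab settings aligned; the project names here are assumptions based on your description, not your actual configuration:

```ini
# Hypothetical projects.cfg excerpt: the nn-ensemble and all of its
# base projects must point to the same vocabulary.
[mllm-pl-lem]
vocab=vocab-pl-lem

[bonsai-pl-lem]
vocab=vocab-pl-lem

[nn-ensemble-pl-lem]
vocab=vocab-pl-lem   ; not vocab-pl
```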
About the nodes=100 case and the memory warnings: I think the reason for the process being killed is not necessarily running out of memory despite the warnings (although you seem to have quite a big vocabulary, judging from the TensorFlow error about incompatible shapes), but again just the differing vocabs. However, if the reason really is memory and you have no other way around it, then instead of an nn-ensemble you could try a regular ensemble, but with optimized weights for the base projects. The hyperopt command can be used for finding good weights, e.g.:
annif hyperopt nn-ensemble-pl-lem --trials 200 path/to/docs
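When it finishes, hyperopt reports the best weights it found, which you can then put into a regular ensemble project's sources setting. A sketch with assumed project names and made-up weights, just to show the shape of the configuration:

```ini
# Hypothetical regular-ensemble project using weights suggested by
# "annif hyperopt"; each base project gets a weight after the colon.
[ensemble-pl-lem]
name=Ensemble Polish (lemmatized)
language=pl
backend=ensemble
vocab=vocab-pl-lem
sources=mllm-pl-lem:0.7,bonsai-pl-lem:0.3
```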
For the record, using only one node in an nn-ensemble project is not helpful, as you probably knew; it makes the neural network behave much the same as a regular ensemble.
I noticed you have a non-default limit of 30 in the MLLM project but not in Bonsai. You could try a non-default limit in Bonsai as well, and for both projects a higher value, even something like 1000: when the suggestions from the base projects are combined by the (nn-)ensemble, it can be advantageous to have many "base suggestions" available.
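For example, the limit could be raised in both base projects like this (again with assumed project names):

```ini
# Hypothetical backend settings: a higher suggestion limit in both
# base projects gives the ensemble more candidates to combine.
[mllm-pl-lem]
limit=1000

[bonsai-pl-lem]
limit=1000
```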
-Juho