Are you sure the word 'senseless' appears in your data at least `min_count=5` times? These algorithms ignore rarer words, because that usually improves results. (And, it's generally better to focus evaluations on the surviving words, or gather more data, rather than make `min_count` very low.)
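One quick way to check, sketched under the assumption that your corpus is a list of token lists (the `texts` variable below is a stand-in for your own data):

```python
from collections import Counter

# Hypothetical tokenized corpus -- substitute your own preprocessed texts
texts = [["the", "senseless", "act"], ["a", "senseless", "choice"]]

# Count raw frequencies the same way the model's vocabulary-survey does
raw_counts = Counter(word for text in texts for word in text)
print(raw_counts["senseless"])  # must be >= min_count to survive

# After training, you can also check the surviving vocabulary directly, e.g.:
#   "senseless" in model.wv.key_to_index   # gensim 4.x
```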
Also, it's generally a good idea to set logging to at least the INFO level, & watch the output for anything anomalous. (Just watching the logged output teaches a lot about the steps, and if anything looks amiss – some total seems off compared to what you think your corpus/vocabulary contains, or some step completes anomalously fast, etc – it's good to dig deeper.)
Separately, I'm assuming your `train_documents1` has exactly as many texts as `tagged_data1` – so you might as well use `total_examples=len(tagged_data1)` to guarantee consistency. And, as you don't show where you set `cores`, note that a bad value there – like `0` or `-1` – could result in no training.
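A safe pattern for both points, with the training call itself shown as a commented sketch since I'm assuming names like `tagged_data1` from your code:

```python
import multiprocessing

# A guaranteed-positive worker count; 0 or negative values can mean no training
cores = multiprocessing.cpu_count()
print(cores)

# Hypothetical training sketch, assuming tagged_data1 is your TaggedDocument list:
#   model = Doc2Vec(vector_size=100, min_count=5, workers=cores)
#   model.build_vocab(tagged_data1)
#   model.train(tagged_data1, total_examples=len(tagged_data1), epochs=model.epochs)
```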
- Gordon