Hi Bhargav,
I have tried both, but I am still getting the same bad assignments. To debug the problem, I took a very small corpus (5 docs: 3 about cars and 2 about food) and ran LDA with num_topics = 2. The topics themselves looked good, but I noticed (with both lda[vector] and lda.get_document_topics(vector)) that every doc was getting the same document-topic probability distribution. For some reason, I was getting "[(0, 0.083333335604447431), (1, 0.91666666439555256)]" for every single document.
I checked the corresponding corpus_tfidf and corpus_bow vectors for all the documents and they were all different (as they should be), but that was not true for the vectors in corpus_lda. With a huge corpus I do get different document-topic distributions, but I am beginning to wonder whether there is something fundamentally wrong with what I am doing or with how gensim's LDA works. Any ideas?
Any help would be appreciated. Thank you so much!