To classify sentences in multiple languages, I need multilingual word embeddings (all languages mapped into a single vector space). Why do this instead of training a separate model per language? Because when I have little data for one language, pooling training data from other languages should make the model more effective.
I am finding it hard to locate a tool that does this. Yes, I know embeddings can be learned on the fly while training the network, but that raises another problem: if I don't have enough data for a language, how good will those learned vectors be? So I decided to start from pre-trained embeddings built on much larger corpora that are similar to my data.
There are tools like Facebook's MUSE, but they don't align multiple languages into a single vector space.
It would be great if the community could help me here. Any questions or suggestions are welcome.
I have already looked into fastText's vector alignment, but it only handles two languages at a time.
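For what it's worth, here is the kind of thing I imagine could work: align every language pairwise onto one pivot language (say English) with orthogonal Procrustes, so all of them end up in the same space. This is only a rough sketch with random stand-in vectors, not real fastText data; the language codes and dictionary pairs below are made up for illustration.

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal matrix W minimizing ||X @ W - Y||_F (rows are word vectors)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
dim, n_pairs = 50, 200

# Stand-in for English fastText vectors of the dictionary entries (the pivot).
en = rng.normal(size=(n_pairs, dim))

aligned = {}
for lang in ["fr", "de", "hi"]:
    # Stand-in for source-language vectors of the same bilingual dictionary
    # entries; in practice these would come from each language's fastText model.
    src = rng.normal(size=(n_pairs, dim))
    W = procrustes(src, en)   # one alignment matrix per language
    aligned[lang] = src @ W   # now lives in the pivot (English) space

# Every language's vectors now share one space, so a single classifier
# could be trained on the pooled data.
```

The same trick is what the pairwise tools do under the hood, as I understand it; chaining it through one pivot is how I hope to get more than two languages into one space.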