There are several similarity/distance measures that can be applied to two vectors. In this case, your vectors are topic models, which are composed of <word, weight> pairs.
See here for a list of different similarity measures:
https://en.wikipedia.org/wiki/Category:Similarity_and_distance_measures

A common one to use for topic models is cosine similarity, but you could choose another based on your needs. For LDA topic models, whose weights are all non-negative, cosine similarity will yield a score in the range [0, 1], with 0 meaning dissimilar and 1 meaning identical. There is some hand-waving going on here, inasmuch as cosine similarity doesn't take magnitudes into account. In other words, there is no difference in the angle made by the lines described by endpoints ((0,0), (1,0)) and ((0,0), (2,0)) on an x,y Cartesian plane: both have an angle of 0 degrees and a cosine of 1, so their similarity will be 1, although the first line has length 1 and the second has length 2.

Another issue: let's say you have one topic model vector <(boat, 0.50), (water, 0.50), (sunken, 0)> and another topic model vector <(boat, 0.33), (water, 0.33), (sunken, 0.33)>. Note the significant difference for the word "sunken": the first topic model vector essentially doesn't include "sunken" at all, while the second one does. It is up to you whether, and how much, you want to count that as a difference. All similarity measurements are statistical measures that should not be divorced from your reasoning about questions and answers. You may want to experiment with different similarity measures on small, toy topic model vectors that you make up, or choose more than one similarity measure. This is the intersection of art and statistics.
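To make this concrete, here is a minimal sketch in Python with NumPy; the toy vectors are the boat/water/sunken example above, and the function name is just for illustration:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors.

    For LDA topic vectors (all weights non-negative), the result lies in [0, 1].
    """
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy topic vectors over the shared vocabulary (boat, water, sunken).
topic_a = np.array([0.50, 0.50, 0.00])
topic_b = np.array([0.33, 0.33, 0.33])

print(cosine_similarity(topic_a, topic_b))  # ~0.816: close, but "sunken" costs it

# Magnitude is ignored: these vectors differ only in length, so the score is 1.
print(cosine_similarity(np.array([1.0, 0.0]), np.array([2.0, 0.0])))  # 1.0
```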
If you have only one corpus and 100 different topic models for that corpus, and want to compare each topic model to every other topic model within that single corpus, that would result in 100 * 99 / 2 = 4,950 unique topic model comparisons. You might assume that LDA topic model comparisons within the same corpus should yield very little overlap, but you might be surprised to find the contrary.
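Here is a sketch of that within-corpus enumeration, assuming `topics` is a hypothetical list of topic vectors (NumPy arrays over a shared vocabulary) and reusing `cosine_similarity` from the sketch above:

```python
from itertools import combinations

# Unique unordered pairs only: n * (n - 1) / 2 of them, i.e. 4,950 for n = 100.
within_corpus = {
    (i, j): cosine_similarity(topics[i], topics[j])
    for i, j in combinations(range(len(topics)), 2)
}
```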
If you have two corpora and 100 different topic models for each corpus, and want to compare each topic model in the first corpus to each topic model in the second corpus, that would result in 100 * 100 = 10,000 unique topic model comparisons.
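The cross-corpus case is a full Cartesian product rather than unordered pairs. In this sketch, `corpus_a` and `corpus_b` are hypothetical lists of topic vectors aligned to the same vocabulary:

```python
from itertools import product

# Every topic in A against every topic in B: 100 * 100 = 10,000 comparisons.
cross_corpus = {
    (i, j): cosine_similarity(corpus_a[i], corpus_b[j])
    for i, j in product(range(len(corpus_a)), range(len(corpus_b)))
}
```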
After you have a full set of comparisons, you can threshold and then rank-order the similarities. For example, choose a similarity threshold of 0.85, meaning that 0.85 is "similar enough" to warrant calling two topic models similar in colloquial terms, and then rank-order the pairs that clear it: "Topic 1 from Corpus A is most similar to Topic 27 from Corpus B", and so on. Something more complex would be to determine the presence or absence of topic model clusters.
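Here is a sketch of that threshold-and-rank step, using the hypothetical `cross_corpus` dictionary from the sketch above (0.85 is just the example cutoff; tune it to your data):

```python
THRESHOLD = 0.85  # "similar enough" in colloquial terms

# Keep only pairs above the cutoff, most similar first.
matches = sorted(
    ((sim, a, b) for (a, b), sim in cross_corpus.items() if sim >= THRESHOLD),
    reverse=True,
)

for sim, a, b in matches:
    print(f"Topic {a} from Corpus A ~ Topic {b} from Corpus B (cosine {sim:.2f})")
```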