The "trials" file is usually part of the evaluation data, and specifies which enrolled speakers are compared with which test utterances, along with whether they are they are "target" trials or "nontarget" trials. E.g.,
spk-id-A utt-id-A target
spk-id-A utt-id-B nontarget
spk-id-A utt-id-C nontarget
spk-id-B utt-id-A nontarget
spk-id-B utt-id-B target
.
.
.
It's not something you need to train the system; rather, it's used to compute an error rate (e.g., EER or minDCF) on the corresponding evaluation dataset.
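For reference, here is a minimal sketch (plain Python, no Kaldi dependency) of how EER could be computed from the trials file and the scores file. It assumes the scores file has one "<spk-id> <utt-id> <score>" line per trial; the file paths and function names below are just placeholders, not anything from a standard recipe.

def load_scores(scores_path, trials_path):
    # Map each (speaker, utterance) pair to its target/nontarget label.
    labels = {}
    with open(trials_path) as f:
        for line in f:
            spk, utt, lab = line.split()
            labels[(spk, utt)] = (lab == "target")
    # Split the scores into target and nontarget lists using those labels.
    target_scores, nontarget_scores = [], []
    with open(scores_path) as f:
        for line in f:
            spk, utt, score = line.split()
            if labels[(spk, utt)]:
                target_scores.append(float(score))
            else:
                nontarget_scores.append(float(score))
    return target_scores, nontarget_scores

def compute_eer(target_scores, nontarget_scores):
    # Sweep every observed score as a candidate threshold and find the point
    # where the false-rejection rate crosses the false-acceptance rate.
    eer, best_gap = 1.0, float("inf")
    for thr in sorted(set(target_scores + nontarget_scores)):
        frr = sum(s < thr for s in target_scores) / len(target_scores)
        far = sum(s >= thr for s in nontarget_scores) / len(nontarget_scores)
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

if __name__ == "__main__":
    # Placeholder paths; point these at your own scores and trials files.
    tgt, non = load_scores("exp/scores/plda_scores", "data/test/trials")
    print("EER: %.2f%%" % (100 * compute_eer(tgt, non)))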
If you have some test recording called "utt-id" (for example), and you don't know the speaker's identity, you can create something like the "trials" file, but without the 3rd column (a small script for generating this is sketched after the example). E.g.,
spk-id-A utt-id
spk-id-B utt-id
spk-id-C utt-id
spk-id-D utt-id
spk-id-E utt-id
.
.
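If it helps, here is a minimal sketch of generating such a label-free trials file; the speaker list, utterance id, and output path are placeholders for whatever your setup uses.

# Placeholder enrolled-speaker ids and test utterance id.
enrolled_speakers = ["spk-id-A", "spk-id-B", "spk-id-C", "spk-id-D", "spk-id-E"]
test_utt = "utt-id"

with open("trials_no_labels", "w") as f:
    for spk in enrolled_speakers:
        # One "<spk-id> <utt-id>" line per enrolled speaker.
        f.write("%s %s\n" % (spk, test_utt))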
Also, be aware that ivector-plda-scoring just provides log likelihood ratios, not binary same-speaker or different-speaker decisions. If you want binary decisions (and it's an open-set problem), you'll still need to decide on a threshold (e.g., a log likelihood ratio above the threshold means "same-speaker", otherwise "different-speaker"). This will probably involve creating some kind of evaluation set using your own data.
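As an illustration only, here is a sketch of applying such a threshold to the scores file; the threshold value of 0.0 and the scores path are placeholders, and in practice you would tune the threshold on your held-out evaluation set (e.g., at the EER or minDCF operating point).

THRESHOLD = 0.0  # placeholder; tune this on your own evaluation data

with open("exp/scores/plda_scores") as f:
    for line in f:
        spk, utt, score = line.split()
        # Scores at or above the threshold are treated as "same-speaker".
        decision = "same-speaker" if float(score) >= THRESHOLD else "different-speaker"
        print(spk, utt, decision)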
Also, search the forums for the word "trials." You'll find that questions have been asked about it before.