I suspect that if you tried the old (`gensim.summarization`) sentence-splitter, you'd find its behavior similar to, or worse than, NLTK's. It used a single regex for sentence-splitting, which you can view (& reuse, if against all odds it works well on your corpus) at:
https://github.com/RaRe-Technologies/gensim/blob/3.8.3/gensim/summarization/textcleaner.py#L37
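To give a rough feel for the regex-splitting approach (this is a toy pattern for illustration, *not* gensim's actual regex – see the link above for that), a minimal sketch:

```python
import re

# Naive splitter: break after ., !, or ? followed by whitespace.
# Toy illustration only, not gensim's pattern.
SENT_RE = re.compile(r'(?<=[.!?])\s+')

def split_sentences(text):
    return [s for s in SENT_RE.split(text.strip()) if s]

print(split_sentences("Dr. Smith arrived. He sat down! Why?"))
# Note the spurious break after "Dr." – a classic weakness of naive regex splitting.
```

That failure mode (abbreviations, initials, decimals) is exactly why a single regex rarely survives contact with a real corpus.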
I'm unfamiliar with the `sentence-splitter` library you mention, but I'd also be sure to evaluate some of the options in `spaCy` – either the default sentence segmentation from its `DependencyParser`, or its alternate rule-based `Sentencizer` class. See spaCy's sentence-segmentation docs for details.
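For reference, a minimal sketch of the `Sentencizer` route (rule-based, so no model download is required; the parser-based alternative would instead load a full pipeline such as `en_core_web_sm`):

```python
import spacy

# Blank English pipeline plus the rule-based Sentencizer component.
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")

doc = nlp("Splitting is hard. Libraries differ! Evaluate on your corpus.")
sentences = [sent.text for sent in doc.sents]
```

Whichever option you pick, evaluate it against a hand-segmented sample of your own corpus rather than trusting defaults.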
- Gordon