See the documentation for the `window` & `shrink_windows` parameters for ways to arrange a much larger (even full-length-of-text) window in which all words have equal weight.
In particular, if `window` is larger than your largest text (noting that the `Word2Vec`/`Doc2Vec`/`FastText` models only support texts of up to 10,000 tokens), every word can be considered part of every context.
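For example, a minimal sketch of such a configuration, assuming Gensim 4.1+ (where `shrink_windows` was introduced); `corpus` here is a stand-in placeholder for your own iterable of token-lists:

```python
from gensim.models import Word2Vec

# Placeholder corpus: an iterable of lists-of-tokens, each up to
# 10,000 tokens (these models silently truncate longer texts).
corpus = [["some", "tokens", "here"], ["another", "short", "text"]]

model = Word2Vec(
    sentences=corpus,
    window=10000,          # larger than any text, so the whole text is each word's context
    shrink_windows=False,  # disable random shrinking, so all in-window positions weigh equally
    min_count=1,
)
```

(Expect such runs to be slower than defaults, per the note below about large, never-shrunk windows.)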
And, if `shrink_windows` is turned off (set to `0` or `False`), then the default technique of using a random, smaller-than-`window` effective window for each context (an efficient way to give nearer words more weight) is disabled: every word within `window` token-positions of the center word will be considered an equal part of the context.
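To see the weighting effect that the random shrinking provides, here's a small standalone simulation (pure Python, no Gensim required; `inclusion_rate` is my own illustrative function, not a library API):

```python
import random

def inclusion_rate(distance, window, shrink, trials=100_000):
    """Estimate how often a word `distance` positions from the center
    word falls inside the effective context window."""
    hits = 0
    for _ in range(trials):
        # With shrinking on (the default), an effective window size is
        # drawn uniformly from 1..window for each center word; with it
        # off, the full configured window is always used.
        eff = random.randint(1, window) if shrink else window
        if distance <= eff:
            hits += 1
    return hits / trials

# With shrinking, a word at distance d is included with probability
# roughly (window - d + 1) / window, so nearer words count more often:
#   inclusion_rate(1, 5, shrink=True)   -> ~1.0
#   inclusion_rate(5, 5, shrink=True)   -> ~0.2
# With shrinking off, everything within `window` is always included:
#   inclusion_rate(5, 5, shrink=False)  -> 1.0
```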
Note that such large, never-shrunk windows will result in relatively longer runtimes.
- Gordon