As mentioned, gensim-3.8.3 wasn't tested/supported for Pythons later than Python-3.8. And, I believe any effort to make (obsolete, slower, buggier) gensim-3.8.3 work on later Pythons would be wasteful compared to the likely similar-or-smaller effort it'd take to adapt older code to work with gensim-4.3.2 under whatever latest Python-3.x you like.
If you really needed to run gensim-3.8.3 for some reason, the most straightforward way is to choose Python-3.8 as your runtime, in a notebook server where you have that option – such as one run locally rather than subject to the version choices of some cloud service.
But now having seen fragments of your error & the code you're trying to run, I don't yet see any evidence that Gensim's behavior, & changes across versions, have anything to do with the error you're hitting. It's tough to see everything that's going on – if forwarding stacks for others to help, it's best to (a) expand & show *all* frames (leaving none collapsed/hidden); & (b) paste tracebacks as text, rather than screenshots (which may trim details, be opaque to later indexing, etc).
Still, it looks like your notebook code inside `get_sentence_vec_avg()` is simply buggy, with respect to whatever data you're running it on. It tries to create a vector-average for some text where *none* of the 'words' are in the model. Thus, every word-lookup generates an exception, prints "Not in vocab", and the `temp` variable is never initialized, even once. The code lacks any handling for this potential case, and assumes `temp` has something – generating the `UnboundLocalError` you see.
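To make the failure mode concrete: this is a minimal sketch of that pattern (a hypothetical reconstruction, not your exact code), where `temp` is only assigned when a lookup succeeds, so a sentence with zero known words reaches the final line with `temp` never bound:

```python
def buggy_avg(words, wv):
    for word in words:
        try:
            temp = wv[word]  # `temp` is assigned only when the lookup succeeds
        except KeyError:
            print("Not in vocab")
    return temp  # raises UnboundLocalError if *no* word was ever found

# A sentence with no in-vocabulary words reproduces the error:
try:
    buggy_avg(["xyz"], {"cat": [1.0, 0.0]})
    reproduced = False
except UnboundLocalError:
    reproduced = True  # `temp` was never bound
```

Note this reproduces with any model or Gensim version; the trigger is the data (no known words), not the library.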
And I'd expect you to get the exact same error with gensim-3.8.3, because at the very high level at which you're using the Gensim `Word2Vec` class, essentially nothing has changed: the exact same "words" (tokens) will be in the model, or not. So you're barking up the wrong tree when trying to fix this with Gensim version twiddling.
Separately, the `get_sentence_vec_avg()` function is kind-of-a-head-scratcher in other ways. For a sense of all its problems, I'll defer to ChatGPT-4, whose opinion I've attached as an output screenshot. (I've not checked its suggested code – which adds the convention that a 'sentence' with no words/known-words gets `None` instead of a vector, which other code would also need to handle – but its points about the original function look correct.) But the TLDR: the function doesn't even do what its name claims – instead just clobbering the 'average' with the *last* 'word' in each 'sentence' – so it's hard to see how this code ever worked properly.
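For reference, a corrected averaging function might look like the sketch below. (This is my own illustrative version, not the ChatGPT output, and it follows the same `None`-for-no-known-words convention; `toy_wv` is a plain dict standing in for a gensim-4 `model.wv` lookup, which supports the same `in` and `[]` operations.)

```python
import numpy as np

def get_sentence_vec_avg(sentence, wv):
    """Return the mean of the vectors for in-vocabulary words,
    or None if no word in the sentence is known to the model."""
    vecs = [wv[word] for word in sentence if word in wv]
    if not vecs:
        return None  # caller must handle sentences with no known words
    return np.mean(vecs, axis=0)

# Toy stand-in for a real model's word-vectors:
toy_wv = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}

avg = get_sentence_vec_avg(["cat", "dog", "xyz"], toy_wv)  # averages known words only
missing = get_sentence_vec_avg(["xyz"], toy_wv)            # None: nothing known
```

The key differences from the original: the average is computed over *all* known words rather than being overwritten each iteration, and the no-known-words case returns an explicit sentinel instead of raising `UnboundLocalError`.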
I'd be happy to help adapt this notebook to gensim-4+ – if any of its problems/errors were related to that. But it's got more foundational problems.
Are you perhaps running it with a tinier test amount of data than the original paper authors, and thus hitting exceptions (no known words) they didn't? If so, you may dodge those errors with different data, but you'd still have the problem it's not really doing an average where it claims to be.
(As one last aside: the notebook shows using a `min_count=1`, which is almost always a mistake when using `Word2Vec` on natural-language texts: the algorithm's performance, & downstream uses, tend to do *better* when rare words are ignored/discarded.)
- Gordon