OK, so what I would recommend is that you build a custom analyzer, and for now, think of it as a black box. The analyzer takes in arbitrary text (as []byte) and ultimately produces a stream of tokens to be indexed. So, in your case, we can think of this analyzer as taking arbitrary text as input and producing a list of zero or more locations found in that text.
Now, there are many different ways that black box can be implemented. As you mentioned, the simplest option would be to use the existing tokenizer to split the text into words, then look up each word in a list. This would be the opposite of the stop filter: instead of removing listed words, it keeps only the words that match a built-in list of known locations.
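To make that concrete, here is a minimal sketch of the "inverse stop filter" idea. Note the `Token` struct and `locationFilter` function here are simplified stand-ins I made up for illustration, not Bleve's actual `analysis` types:

```go
package main

import (
	"fmt"
	"strings"
)

// Token is a simplified stand-in for an analyzer token: the term plus
// its byte offsets in the original input.
type Token struct {
	Term  string
	Start int
	End   int
}

// locationFilter keeps only tokens found in the dictionary -- the
// inverse of a stop filter, which removes listed words.
func locationFilter(input []Token, locations map[string]bool) []Token {
	var out []Token
	for _, tok := range input {
		if locations[strings.ToLower(tok.Term)] {
			out = append(out, tok)
		}
	}
	return out
}

func main() {
	locations := map[string]bool{"paris": true, "london": true}

	// Naive whitespace tokenization, standing in for a real tokenizer.
	text := "flights from Paris to London tomorrow"
	var tokens []Token
	pos := 0
	for _, w := range strings.Fields(text) {
		start := strings.Index(text[pos:], w) + pos
		tokens = append(tokens, Token{Term: w, Start: start, End: start + len(w)})
		pos = start + len(w)
	}

	for _, t := range locationFilter(tokens, locations) {
		fmt.Printf("%s [%d:%d]\n", t.Term, t.Start, t.End)
	}
}
```

In Bleve the equivalent logic would live in a type implementing the token filter interface, but the core of it is just this membership test over the token stream.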
This simple implementation would work, but because each token is looked up individually, you lack the context needed to identify false positives. However, you could have a more complicated implementation that considers the text around the location, adding heuristics to improve the quality of the generated tokens. This code can be written in Go and do whatever additional logic you want.
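As one example of such a heuristic, you could require that the matched word be capitalized in the original text, so that "Turkey" (the country) is kept but "turkey" (the bird) is dropped. This is a sketch under the same assumed stand-in types as before, not Bleve's real API:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// Token is a simplified stand-in for an analyzer token (hypothetical,
// not Bleve's actual analysis.Token).
type Token struct {
	Term     string
	Position int
}

// extractLocations keeps tokens whose lowercased term is a known
// location AND whose original spelling starts with an upper-case
// letter -- a crude heuristic for filtering false positives.
func extractLocations(tokens []Token, known map[string]bool) []Token {
	var out []Token
	for _, t := range tokens {
		if !known[strings.ToLower(t.Term)] {
			continue
		}
		r := []rune(t.Term)
		if len(r) > 0 && unicode.IsUpper(r[0]) {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	known := map[string]bool{"turkey": true, "paris": true}
	tokens := []Token{
		{Term: "roasted", Position: 1},
		{Term: "turkey", Position: 2}, // lower case: likely the bird
		{Term: "in", Position: 3},
		{Term: "Turkey", Position: 4}, // capitalized: likely the country
	}
	for _, t := range extractLocations(tokens, known) {
		fmt.Println(t.Term, t.Position)
	}
}
```

Capitalization is obviously not a complete disambiguator; the point is that once you control the filter code, you can look at neighboring tokens, positions, or anything else you find useful.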
I would recommend you start by wiring up the simple per-token-lookup analyzer first. That will get you familiar with the structs and interfaces required, and you can see which data/parameters are available to work with.
NOTE: there is ongoing interest in adding proper synonym support to Bleve, but that work has not yet started
marty