Yep, global pooling is also how KataGo already handles multiple board sizes with a single model. Convolution weights don't care about board size at all, since the filter dimensions have no dependence on it, so the only obstacle is layers whose parameter count does depend on board size, like fully-connected layers — and those are often easily replaceable with a combination of global pooling and convolution, making the whole net board-size-independent. (Although you still need to train/finetune appropriately.)
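As a rough illustration of why this works — this is not KataGo's actual architecture, just a minimal NumPy sketch of the general pattern: a convolution followed by global average pooling yields a fixed-length feature vector regardless of spatial size, so any linear head stacked on the pooled features has no board-size dependence.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same'-padded 2D cross-correlation.
    x: (C_in, H, W), w: (C_out, C_in, 3, 3) -> (C_out, H, W)."""
    c_out, c_in, kh, kw = w.shape
    _, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad spatial dims
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(kh):
                for dj in range(kw):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + H, dj:dj + W]
    return out

def global_pool(x):
    """Mean over both spatial dims: (C, H, W) -> (C,), for any H, W."""
    return x.mean(axis=(1, 2))

rng = np.random.default_rng(0)
w1 = rng.normal(size=(8, 1, 3, 3))   # conv weights: no board-size dependence
head = rng.normal(size=(8,))          # toy linear "value head" on pooled features

# The exact same weights process 9x9, 13x13, and 19x19 boards.
for size in (9, 13, 19):
    board = rng.normal(size=(1, size, size))
    feats = global_pool(conv2d(board, w1))
    value = feats @ head
    print(size, feats.shape)  # feature vector is shape (8,) at every size
```

A fully-connected layer applied directly to the board, by contrast, would need a weight matrix with `size * size` columns, pinning the model to one board size.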
Nice to see the same idea finally making its way into papers too. Between that and all the experiments projects like LC0 are trying across their various training runs, it often feels like formal published research in this area is a step behind what the major projects are already doing successfully.