Hi Xi Xi,
One of the biggest issues is which solver and settings you're using. We've generally found that the commercial solvers (Gurobi, CPLEX, Mosek, Cardinal) run 10-100+ times faster than the open solvers (GLPK, CBC), although HiGHS is closing that
gap a bit. We usually find the barrier algorithm is fastest, but with only a few cores you may find that simplex or dual simplex is faster.
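If it helps, here's a sketch of how we usually set this up. Switch can read default command-line arguments from an options.txt file in the model directory; the flag names below are from memory and may differ by Switch version, and "method=2" is Gurobi's code for barrier, so check `switch solve --help` and your solver's parameter reference before relying on them:

```
--solver gurobi
--solver-options-string "method=2 threads=4"
```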
Is the long wait in Python/Switch (before you start seeing any output from the solver) or in the solver itself?
You might also check how much RAM you have -- that is a medium-size model, and I would expect it to need about 8 GB in the Python stages and around 2 GB in the solver stage. If you have much less than that, things can slow down a lot as the
system swaps to disk.
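A quick way to check this on Linux from the same Python environment you run Switch in, using only the standard library (the psutil package gives the same info more portably):

```python
# Rough check of total and currently available RAM on Linux.
# os.sysconf keys below are POSIX/Linux; they may be missing on other OSes.
import os

page_size = os.sysconf("SC_PAGE_SIZE")                      # bytes per page
total_gb = os.sysconf("SC_PHYS_PAGES") * page_size / 1e9
avail_gb = os.sysconf("SC_AVPHYS_PAGES") * page_size / 1e9

print(f"total RAM:     {total_gb:.1f} GB")
print(f"available RAM: {avail_gb:.1f} GB")
```

If the "available" number is well under the ~8 GB the Python stage needs, that alone could explain the slowdown.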
Another common option is to reduce the problem size. I've found solver time is roughly cubic with respect to the number of timepoints or load zones in the model, so reducing those can help a lot. I would generally aggregate to larger zones (10-25
total?) and run with more sample days per year (12+) and fewer sample years (4-6 total). If that is still too big, I would switch the timepoints to every 2 or 3 hours instead of hourly. A model of that size is usually tractable with the commercial solvers
and 16 GB of RAM or so. But with the open solvers you generally need a much smaller model (generally only toy models will work with GLPK or CBC, although HiGHS may solve a medium-size model if you are patient).
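To make the cubic rule of thumb concrete, here's a back-of-envelope estimate (the exponent is only approximate, so treat these as order-of-magnitude numbers):

```python
# Rough estimate of relative solve time under the "roughly cubic" rule
# of thumb: time scales with (new size / old size) ** 3.
def relative_solve_time(scale_factor, exponent=3):
    """Relative solve time when timepoint or zone count is scaled."""
    return scale_factor ** exponent

# Halving the timepoints (2-hour steps instead of hourly):
print(relative_solve_time(0.5))               # 0.125 -> roughly 8x faster

# Aggregating 50 load zones down to 20:
print(round(relative_solve_time(20 / 50), 3)) # 0.064 -> roughly 15x faster
```

Combining both reductions multiplies the savings, which is why aggressive aggregation pays off so quickly.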
I hope this helps. Let me know if you have more questions.