There is no hard limit on the size of a problem that you can handle (apart from running out of memory). The time to process the model and emit it to the solver generally grows smoothly with the size of the model. The thing you have to be cognizant of is that there are always multiple ways to express constraints and variables, and while they should all result in the same final optimization problem, they can take very different paths through the code to get there. So, in this case I would second Gabe's comment about restructuring your model to leverage sets and sums over sets instead of machine-generated, fully expanded expressions (see the sketch below). The other thing to keep an eye on is whether any of your constraints are triggering expression "cloning".
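For illustration, here is a minimal sketch of what that restructuring can look like in Pyomo; the model, set, and parameter names (`m`, `I`, `a`, `b`) and the sizes are hypothetical:

```python
from pyomo.environ import (
    ConcreteModel, RangeSet, Param, Var, Constraint, NonNegativeReals
)

m = ConcreteModel()
m.I = RangeSet(1, 1000)                # index set (hypothetical size)
m.a = Param(m.I, initialize=1.0)       # coefficients (placeholder values)
m.b = Param(initialize=100.0)          # right-hand side (placeholder)
m.x = Var(m.I, within=NonNegativeReals)

# Instead of a machine-generated, fully expanded expression like
#   x[1] + x[2] + x[3] + ... + x[1000] <= 100
# let Pyomo build the sum over the set:
def budget_rule(m):
    return sum(m.a[i] * m.x[i] for i in m.I) <= m.b
m.budget = Constraint(rule=budget_rule)
```

Indexed constraints work the same way: pass the set as an argument to `Constraint` and give the rule an index parameter, and Pyomo will generate one constraint per set member rather than parsing a huge hand-expanded expression.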
The largest model I have ever personally generated had a little over 13 million variables and 21 million constraints. It took Pyomo about 45 minutes to generate the model and then another 70 minutes to perform various automated model transformations (primarily connector expansion and GDP relaxations). My notes show another ~30 minutes spent on other activities, like writing out the LP file. Note that this was all done almost 3 years ago with Coopr 3.3, so things should be different (better) now.
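For reference, in current Pyomo the GDP relaxation and LP-writing steps look roughly like the sketch below (I've omitted connector expansion, since that API has changed); `model` is assumed to be an already-built ConcreteModel:

```python
from pyomo.environ import TransformationFactory

# Relax the GDP (disjunctive) structure into a MIP; 'gdp.bigm'
# is one of several registered relaxations.
TransformationFactory('gdp.bigm').apply_to(model)

# Write the transformed model out as an LP file.
model.write('model.lp', io_options={'symbolic_solver_labels': True})
```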