No, it's not a hard requirement, and if we worked in infinite precision it wouldn't matter. On the other hand, having models with both extremely large and extremely small magnitudes (say 1e10 to 1e-10) is probably not going to work well in practice. Physicists often build such models, and almost always have to rescale their data to get a usable solution. It's very similar to solving linear systems of equations or linear least-squares problems in finite precision using Matlab; if the data matrices are too ill-conditioned, the factorization will break down, or the solution will be extremely sensitive to perturbations.
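A small NumPy sketch (mine, not from the original discussion) of how mixed magnitudes inflate the condition number of a least-squares matrix, and how column scaling repairs it:

```python
import numpy as np

rng = np.random.default_rng(0)

# A well-behaved 50x3 least-squares matrix ...
B = rng.standard_normal((50, 3))
# ... whose columns we rescale to span 1e10 down to 1e-10, mimicking a model
# that mixes extremely large and extremely small magnitudes.
A = B * np.array([1e10, 1.0, 1e-10])

# With this scaling, A is numerically rank-deficient in double precision:
# any factorization-based solve breaks down or is hopelessly sensitive.
print("cond(A)        = %.1e" % np.linalg.cond(A))

# Normalizing each column to unit 2-norm restores a modest condition number;
# the solution of the original problem is recovered by dividing by the norms.
d = np.linalg.norm(A, axis=0)
print("cond(A scaled) = %.1e" % np.linalg.cond(A / d))
```

Note that the printed cond(A) actually understates the true value (about 1e20), because a condition number that large cannot even be measured in double precision.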
What every user of CVXOPT or other software packages should understand are the limitations of computation in double-precision floating point.
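To put numbers on that: double precision carries roughly 16 significant decimal digits, so quantities 20 orders of magnitude apart cannot even coexist in a single sum:

```python
import sys

print(sys.float_info.epsilon)  # ~2.22e-16: the machine epsilon of a double
print(1e10 + 1e-10 == 1e10)    # True: the 1e-10 term is rounded away entirely
```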
1) If a model can be represented using data ranging from 1 to 1e-6, rather than from 1e10 to 1e-10, then naturally go for the former.
2) Never build models with linear dependencies, e.g. redundant constraints; they make the linear systems inside the solver singular or nearly so.
3) If you feel your model is OK, but the solver still stalls, then you need to massage your model before the solver can handle it. One such massaging trick is normalization, as Martin suggested; see the sketch below. But the solver cannot figure out the best way to massage the data - that's up to the user.
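A minimal sketch of that kind of normalization, with made-up LP data (it uses CVXOPT's solvers.lp; in this toy problem the bad scaling lives entirely in the variables, so column equilibration suffices - in general rows and the objective may need the same treatment):

```python
import numpy as np
from cvxopt import matrix, solvers

solvers.options['show_progress'] = False

# Badly scaled LP data (made up): minimize c'x subject to G x <= h,
# with the two variables living on wildly different scales.
c_np = np.array([1e6, 1e-6])
G_np = np.array([[-1e6,  0.0 ],
                 [ 0.0, -1e-6],
                 [ 1e6,  1e-6]])
h_np = np.array([1.0, 1.0, 2.0])

# Massage step: substitute y = d*x, with d the column norms of G, so the
# solver sees a constraint matrix (and objective) of order one.
d = np.linalg.norm(G_np, axis=0)
sol = solvers.lp(matrix(c_np / d), matrix(G_np / d), matrix(h_np))

# Undo the scaling to recover the solution of the original problem.
x = np.array(sol['x']).ravel() / d
print(sol['status'], x)   # optimal, x ~ (-1e-6, -1e6)
```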