I think the answer is no: it is not strictly necessary to call the setup function again, provided you do the initial setup in the right way.
The purpose of the setup function is to perform a symbolic factorisation of the KKT matrix (which has P and A as submatrices) and to allocate memory for its factors. This symbolic factorisation is based on the locations of all *potential* non-zeros in the data matrices.
You can therefore initialise your problem with A and P having non-zero values in *every position* in which a non-zero could conceivably appear in your problem. If you do the setup in OSQP using these matrices, then it will not matter if you later change some of these values between numerically zero and nonzero.
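As a minimal sketch of this idea (using scipy.sparse rather than OSQP itself, and with made-up 2x2 data), you can store an explicit zero at every position that might become nonzero later. The stored zero is part of the symbolic sparsity pattern, so its value can be changed afterwards without altering the pattern:

```python
import numpy as np
from scipy import sparse

# Hypothetical 2x2 constraint matrix A: position (1, 0) is currently
# zero but might become nonzero later, so we store an explicit 0.0
# there when building the CSC matrix.
data = np.array([1.0, 0.0, 2.0])   # note the stored 0.0
rowidx = np.array([0, 1, 1])       # row indices
colptr = np.array([0, 2, 3])       # column pointers
A = sparse.csc_matrix((data, rowidx, colptr), shape=(2, 2))

print(A.nnz)  # 3: the explicit zero counts as a symbolic nonzero

# Later, flip the stored zero to a true nonzero without touching
# rowidx/colptr, i.e. without changing the symbolic sparsity.  In
# OSQP's Python interface this is what prob.update(Ax=...) does.
A.data[1] = 5.0
print(A.nnz)  # still 3: the pattern is unchanged
```

If you instead built A without the explicit zero, position (1, 0) would not exist in the pattern and could not be made nonzero by a value update alone.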
The reason this works is that what counts as 'nonzero' in the P and A matrices is determined only by the rowidx and colptr data (i.e. the 'symbolic sparsity') during the symbolic factorisation. Whether the numerical value of some of those entries is actually zero does not matter during setup.
I would strongly advise doing the above only when a very limited set of entries might change from zero to nonzero (or back). If the sparsity of P or A changes drastically, it is almost certainly preferable to run setup again with the new data. The reason is that the symbolic sparsity pattern of the KKT factors is dictated by the symbolic sparsity of P and A: if you create these matrices with many entries that are 'maybe zeros', then your KKT factors will have a lot of fill-in (i.e. low sparsity), making the solve times slower than they would otherwise be. I do not think it is possible to make a general statement about how many entries count as a 'limited set', because it will be strongly problem specific.
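The fill-in effect can be illustrated with scipy's SuperLU wrapper (an assumption made for illustration only; OSQP uses its own factorisation routine, and the matrices here are a toy example, not a real KKT system). Two matrices with the same number of nonzeros but different patterns can produce factors of very different density:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

n = 6

def arrow(dense_index):
    # Diagonally dominant "arrow" matrix with a dense row and column
    # at position dense_index; every other off-diagonal entry is zero.
    M = 10.0 * np.eye(n)
    M[dense_index, :] = 1.0
    M[:, dense_index] = 1.0
    M[dense_index, dense_index] = 10.0
    return csc_matrix(M)

# Use the natural ordering with diagonal pivoting so the fill-in is
# not hidden by SuperLU's fill-reducing permutation.
opts = dict(permc_spec='NATURAL', diag_pivot_thresh=0.0)
lu_bad = splu(arrow(0), **opts)       # dense first row/col: heavy fill-in
lu_good = splu(arrow(n - 1), **opts)  # dense last row/col: no fill-in

print(lu_bad.L.nnz, lu_good.L.nnz)  # the bad pattern stores many more entries
```

Both matrices have identical nnz, but eliminating the dense column first makes every later entry of the factors nonzero. This is the same mechanism by which padding P and A with many 'maybe zeros' can degrade the KKT factors.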
Goran’s statement below about providing initial primal and dual solutions works in either case.