Yeah, we should really have a default constructor or something, at least. "p" is your go-to knob for maximum cardinality and error bounds; it works basically the same as it does for plain HLL. In practice it is also "better" in a "measured, but not well understood" sense, but I can't give much guidance there.
The HLL constructors also support deriving "p" automatically from a target error bound (a double -> p conversion), so we could add something like that here.
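For reference, a minimal sketch of what such a conversion could look like, assuming the standard HLL error formula (stderr ≈ 1.04 / sqrt(2^p)); the method name and rounding choice here are hypothetical, not the library's actual API:

```java
public class ErrorToP {
    // Returns the smallest p whose expected relative error is <= maxError.
    // 1.04 is the standard HLL error coefficient; solving
    // 1.04 / sqrt(2^p) <= maxError gives 2^p >= (1.04 / maxError)^2.
    static int pForError(double maxError) {
        double minRegisters = Math.pow(1.04 / maxError, 2);
        return (int) Math.ceil(Math.log(minRegisters) / Math.log(2));
    }

    public static void main(String[] args) {
        // A 1% target error lands on p = 14, which matches the tl;dr below.
        System.out.println(pForError(0.01)); // 14
    }
}
```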
As for "sp", it affects how the sparse set behaves in a lot of subtle ways. A decent rule of thumb is that higher "sp" gives higher accuracy in exchange for more space at low cardinalities, though even that relationship may be non-linear. More "sp" means fewer collisions, and therefore a more saturated sparse set for a given "p". The space consumption is also amplified in serialized form relative to in-memory, because we use fixed-width integers (int[]) in memory but variable-length integers for serialization.
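To illustrate the in-memory vs. serialized gap, here is a hedged sketch using an unsigned LEB128-style varint (the general technique, not necessarily the library's actual wire format); the entry values are made up:

```java
import java.io.ByteArrayOutputStream;

public class VarintDemo {
    // Encode one non-negative int as an unsigned LEB128-style varint:
    // 7 payload bits per byte, high bit set on all but the final byte.
    static void writeVarint(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80); // low 7 bits + continuation bit
            value >>>= 7;
        }
        out.write(value); // final byte, continuation bit clear
    }

    public static void main(String[] args) {
        int[] sparse = {3, 200, 70_000, 2_000_000}; // hypothetical sparse entries
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int v : sparse) writeVarint(out, v);
        // In memory each entry is a fixed 4-byte int; serialized, small
        // values shrink to 1-3 bytes each.
        System.out.println("in-memory bytes:  " + sparse.length * 4); // 16
        System.out.println("serialized bytes: " + out.size());        // 9
    }
}
```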
Conversion between the two is defined strictly by "p" and sparse set saturation, so a higher "sp" will trigger the conversion to normal mode sooner, but with an interesting cost. A higher-"sp" HLL++ is clearly more accurate than a lower-"sp" one while both are in sparse mode, and sparse mode is guaranteed to be at least as accurate as the corresponding normal mode (for less space). But since the higher-"sp" HLL++ converts first, there may be an interval in which it would have benefited more from degrading its "sp" value than from doing a full sparse -> normal conversion. In practice, though, that is likely a needless expense, since real cardinalities are no more likely to land just past the conversion point than anywhere else.
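A hedged sketch of what a saturation-based conversion point could look like, assuming (hypothetically) 4 bytes per in-memory sparse entry and a byte-per-register dense layout; real implementations pack registers into ~6 bits, which shifts the threshold, and the actual policy may differ:

```java
public class ConversionPoint {
    // Convert to normal mode once the in-memory sparse set costs more
    // than the dense register array would. Note the threshold depends
    // only on p, not sp: a higher sp just fills the sparse set faster
    // (fewer collisions), so it reaches this point at a lower cardinality.
    static boolean shouldConvert(int sparseEntries, int p) {
        long sparseBytes = 4L * sparseEntries; // int[] in memory
        long denseBytes = 1L << p;             // 2^p one-byte registers
        return sparseBytes > denseBytes;
    }

    public static void main(String[] args) {
        // With p = 14 the dense array is 16384 bytes, so under these
        // assumptions the sparse set flips after 4096 entries.
        System.out.println(shouldConvert(4096, 14)); // false: at cost parity
        System.out.println(shouldConvert(4097, 14)); // true
    }
}
```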
tl;dr: (p, sp) = (14, 25) is pretty okay