Hi Bachir,
Sorry for the late reply.
I think multiplying the data by this vector could work if we fit over low and/or narrow r-ranges, but Peter and I discussed it, and I think it comes down to this:
At high r, peaks become weak due to the instrument, so these peaks contribute less to the cumulative and total residual, adding an element of "uncertainty" to what we would call the average-structure (large-r) description arrived at by fitting the data.
If we alter the data so that these high-r peaks are on the same scale as the low-r peaks, we are effectively removing this uncertainty artificially. The high-r peaks now contribute an artificially large amount to the cumulative and total residual, which could pull the fit toward a false solution, especially if the local (low-r) structure of the real material differs from the average (high-r) structure.
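To make the weighting effect concrete, here is a minimal numpy sketch; the Gaussian envelope, the Qdamp-like value, and the toy signals are all assumptions for illustration, not a description of your implementation:

    import numpy as np

    r = np.linspace(0.0, 50.0, 2001)
    # Assumed Gaussian instrumental damping envelope, exp(-(Qdamp*r)^2/2).
    envelope = np.exp(-0.5 * (0.04 * r) ** 2)
    g_obs = envelope * np.sin(2.20 * r)    # toy "measured" PDF
    g_calc = envelope * np.sin(2.25 * r)   # toy model, slightly wrong

    high_r = r > 25.0
    # High-r share of the residual with the data as measured ...
    raw = np.sum((g_obs - g_calc)[high_r] ** 2)
    # ... and after multiplying data and model by 1/envelope.
    boosted = np.sum(((g_obs - g_calc) / envelope)[high_r] ** 2)
    print(raw, boosted)  # boosted >> raw: high-r misfit is artificially inflated

The point is just that dividing out the envelope re-weights the fit toward the high-r region, where the measurement is least reliable.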
What I'm imagining is a damping vector (as we call it) that is always held fixed in your refinement. Users would establish this instrumental effect on a separate, known standard material, and then keep it fixed and unchanging when fitting their unknown. A sketch of this workflow follows below.
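Something along these lines, assuming the common Gaussian exp(-(Qdamp*r)^2/2) form; the function names and the Qdamp value here are hypothetical:

    import numpy as np

    def damping_envelope(r, qdamp):
        # Assumed Gaussian instrumental damping, exp(-(Qdamp*r)^2/2).
        return np.exp(-0.5 * (qdamp * r) ** 2)

    # Calibrated once against a known standard (e.g. Ni); value is made up.
    QDAMP_FIXED = 0.045

    def g_calc_damped(r, g_calc_ideal):
        # During refinement of the unknown, the model is multiplied by the
        # fixed envelope; the data are left untouched, so the residual
        # weights keep their physical meaning.
        return damping_envelope(r, QDAMP_FIXED) * g_calc_ideal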
Would it be difficult to reintroduce the previous version of this scaling vector, and would it be expensive to compute? It would certainly be useful for us.
Below is an example of the effect, over the whole range:
low-r:
[figure omitted]
and high-r:
[figure omitted]
Regards,
Rob