Instrumental resolution


Robert Koch

Feb 21, 2020, 4:46:02 PM
to fullrmc

Hi everyone,

We've been trying to use fullRMC for crystalline materials. These types of experiments often have poor Q-space resolution, which manifests in the experimental PDF G(r) as a decay of the peak intensities with increasing r.

Does anyone know if there is any existing functionality in fullRMC that would allow us to multiply the computed PDF by an array of the same shape to mimic this effect? We could instead divide the experimental data, but wherever possible we try to alter the experimental data as little as possible.

Cheers!
Rob

Bachir Aoun

Feb 21, 2020, 10:12:23 PM
to fullrmc
Hi Robert,

You should use a window function, which is actually applied as a convolution, to mimic your experimental resolution.

regards

Robert Koch

Feb 22, 2020, 9:09:36 AM
to fullrmc
Hi Bachir,

Thank you! We do use a window function, which accounts for the limited range we measure over in Q-space (multiplication in Q-space by a hat function) via a convolution in r-space with the hat function's Fourier transform.

Our Q-space data, however, is composed of crystalline peaks. As a result of the finite Q-space instrument resolution, these peaks are convolved in Q-space with a Gaussian instrument function. The result in r-space is a multiplication of the entire PDF by the Fourier transform of that Gaussian instrument function.

So I do understand that we can use the window function to handle the limited range we measure over in Q-space, but to handle the effect of the Gaussian instrument function convolved in Q-space, a point-wise multiplication of the PDF is necessary.

I think this is not really an issue in materials lacking order beyond r > 50 Angstrom, but in crystalline materials the PDF should show signal out to the crystalline domain size, sometimes on the order of microns. In practice, because of this convolution with the Gaussian instrument function in Q-space, we typically do not see any PDF signal beyond, say, 80 Angstrom when looking at crystalline materials in a typical geometry.
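
To make that concrete, here is a rough numpy sketch of the multiplication we have in mind (not fullRMC code; the Qdamp value is just an example of an instrument parameter that would normally be characterised on a standard):

    import numpy as np

    # Assumed instrument parameter, normally determined from a standard material
    Qdamp = 0.04                               # example value, in 1/Angstrom
    r = np.arange(0.01, 100.0, 0.01)           # r-grid of the computed PDF, in Angstrom

    # A Gaussian resolution function in Q becomes a Gaussian damping envelope in r
    envelope = np.exp(-0.5 * (Qdamp * r) ** 2)

    # g_calc would be the model G(r) computed by fullRMC on the same r-grid;
    # the correction we are after is simply:
    #     g_damped = envelope * g_calc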

Is there any existing way to inject a simple array multiply into the PDF calculation? 

Cheers,
Rob

Bachir Aoun

Feb 22, 2020, 9:45:55 AM
to fullrmc
Hi Robert,
Thank you for the explanation.
In an earlier fullrmc version I had implemented both the window function and a normalization vector that does a point-wise multiplication of the total PDF or S(Q) as a final computation step, prior to comparing with the experimental data.
Later on I removed this extra step, and I also thought about removing the window function but didn't. These are passive, repetitive computing actions that are performed at every single engine step. The convolution is less trivial and not everyone knows how to do it, which is why it stayed. As for the normalization vector, instead of multiplying the model total, one should divide the experimental data prior to creating the constraint.
You can also use a scale factor, or let fullrmc fit one for you at a certain engine-step frequency.
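
For example, something along these lines before the constraint is created (a minimal sketch; the file name and Qdamp value are only placeholders):

    import numpy as np

    # Experimental PDF as two columns r, G(r); Qdamp characterised on a standard
    r, g_exp = np.loadtxt("experimental_pdf.dat", unpack=True)
    Qdamp = 0.04                               # example value, in 1/Angstrom

    # Divide the instrumental damping out of the data once, instead of
    # multiplying the model at every engine step
    envelope = np.exp(-0.5 * (Qdamp * r) ** 2)
    g_corrected = g_exp / envelope

    np.savetxt("experimental_pdf_corrected.dat", np.column_stack([r, g_corrected]))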

Maybe I made a mistake removing the normalization vector. Do you think I should put it back?

Regards

Robert Koch

Feb 23, 2020, 8:49:21 PM
to fullrmc
Hi Bachir,

Sorry for the late reply.

I think multiplying the data by this vector could work if we fit over low and/or narrow r-ranges, but Peter and I discussed it, and I think it comes down to this:

At high r, peaks become weak due to the instrument. These peaks then contribute less to the cumulative and total residual, adding an element of "uncertainty" to what we would call the average-structure (large-r) description arrived at by fitting the data.

If we alter the data so that these high-r peaks are on the same scale as the low-r peaks, we effectively remove this uncertainty artificially. These high-r peaks then contribute an artificially larger amount to the cumulative and total residual. This could pull the fit towards a false solution, especially if the local (low-r) structure in the real material is different from the average (high-r) structure.
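
Putting the same point in formulas (my notation, with B(r) the instrumental damping envelope, which falls towards zero at high r): multiplying the model and rescaling the data do not produce the same residual, because dividing the data re-weights each point by 1/B(r)^2,

    \sum_r \big[ G_\mathrm{exp}(r) - B(r)\, G_\mathrm{calc}(r) \big]^2
      \;\neq\;
    \sum_r \Big[ \tfrac{G_\mathrm{exp}(r)}{B(r)} - G_\mathrm{calc}(r) \Big]^2
      \;=\;
    \sum_r \tfrac{1}{B(r)^2} \big[ G_\mathrm{exp}(r) - B(r)\, G_\mathrm{calc}(r) \big]^2 ,

so the weak high-r peaks end up dominating the error rather than contributing less to it.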

What I'm imagining is a damping vector (as we call it) that is always held fixed in the refinement. Users would establish this instrumental effect on a different, known standard material, and then keep it fixed and unchanging while fitting their unknown.

Would it be difficult to reintroduce the previous version of this scaling vector, and would it be expensive to compute? It would certainly be useful for us.

Below is an example of the effect, over the whole range (full.png), at low r (low.png), and at high r (high.png).

Regards,
Rob

Bachir Aoun

Feb 24, 2020, 4:37:29 AM
to fullrmc
Hi Robert,
Maybe I am not understanding your request correctly. Are you trying to set an r- or Q-dependent vector to alter the total error computation? So what you'd like to do is, for instance, have the high-r data contribute less to the model's total squared error. Am I understanding this right?

If so, this is totally feasible now in fullrmc, and it is done using the dataWeight property of the experimental constraint. This adds some computation overhead, but not much; you shouldn't worry about it.
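
For example, a rough sketch (the weights array below is only an illustration, and the exact call that sets the dataWeight property may differ between fullrmc versions, so check the constraint documentation):

    import numpy as np

    # Per-point weights on the same grid as the experimental data,
    # de-emphasising the high-r region (Qdamp value is only an example)
    r = np.arange(0.01, 100.0, 0.01)
    Qdamp = 0.04
    weights = np.exp(-0.5 * (Qdamp * r) ** 2)

    # PDF_CONSTRAINT is the already created experimental constraint; the call
    # below is assumed -- use whatever sets the dataWeight property in your version
    # PDF_CONSTRAINT.set_data_weights(weights)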

Regards

Robert Koch

Feb 24, 2020, 10:24:05 AM
to fullrmc
Sorry for the confusion! It was late on Sunday and I was a bit tired. My discussion of the error contribution was only to highlight that scaling the experimental data could cause fitting issues; ideally we'd like to scale the model.

We are trying to set an r-dependent vector to scale the computed model. The snapshots I attached are simulated PDF data with and without this vector correction (computed by another program), just to highlight the effect.

Cheers,
Rob

Bachir Aoun

Feb 24, 2020, 1:10:22 PM
to fullrmc
Hi Robert,

No worries, I might be slow to understand too :)

So what you are trying to accomplish is to have an r- or Q-dependent, fixed scaling vector that point-wise multiplies the model constraint's total output prior to comparing it to the experimental data and computing the error... If I got it right this time, then this is what I had before and removed.
  • Initially I had this functionality because I naively thought a user could post-correct their experimental data processing in fullrmc while doing the stochastic fitting. Later I found out that this is a very dangerous feature to have. Some users used it to manually manipulate the model to fit the experimental data. I am sure their intention was not to cheat, but having this option available let people think that this is scientific and OK to do.
  • An OK and scientific damping is a scalar multiplication across all the data points, but not a vector that alters the short- and long-range ordering differently.
Still, with what you described, I don't see the difference between multiplying the model output by an r-dependent vector and dividing the experimental data, or vice versa (dividing the model and multiplying the experimental data).

With all of that being said, if you are trying to use the same model to fit two constraints (e.g. PDF experimental data) of the same material collected under the same conditions but on different instruments with different experimental resolution, then this is totally feasible. Just create the two constraints, one per PDF data set, and add them to the Engine.
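
Roughly like this (a sketch only; the file names are placeholders, and you should check the fullrmc examples for the exact constructor arguments):

    from fullrmc.Engine import Engine
    from fullrmc.Constraints.PairDistributionConstraints import PairDistributionConstraint

    # Placeholder file names; one PDF data set per instrument/resolution
    ENGINE = Engine(path="my_engine.rmc")
    ENGINE.set_pdb("my_structure.pdb")

    PDF_INSTRUMENT_1 = PairDistributionConstraint(experimentalData="instrument_1.gr")
    PDF_INSTRUMENT_2 = PairDistributionConstraint(experimentalData="instrument_2.gr")

    # Both constraints are fitted against the same model simultaneously
    ENGINE.add_constraints([PDF_INSTRUMENT_1, PDF_INSTRUMENT_2])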

If I am still not understanding your question, I suggest we have a quick 30-minute call...

thanks

Robert Koch

Feb 24, 2020, 2:58:30 PM
to fullrmc
Hi Bachir,

A call might be good to iron out the finer points. Could we do it sometime tomorrow? You can email me directly if you want.

Cheers,
Rob

Robert Koch

Feb 27, 2020, 12:02:02 PM
to fullrmc
Hi Everyone,

We decided that, based on how fullRMC calculates the residual, the best route would be to correct the experimental data.

cheers,
Rob