
Dec 9, 2023, 12:03:47 PM

to lmfit-py

Hello All,

I am a new user of lmfit, and I really appreciate what the module can do. However, I am now encountering an oddity when I try to fit some observational data, and I am hoping someone would be able to help me rectify my problem.

My data consist of x, y, and y_err, which I have reproduced as arrays in the code below. I want to fit a power law to these data; however, because of the limitations of fitting a power law directly (i.e. it must pass through the origin), I instead take the log10 of my data and fit a line to it. This way I can recover the exponent of the power law as the slope of the line, which for my purposes is all I really care about.
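To spell out the linearization (a toy check with made-up numbers, not my actual data): a power law y = A * x**b becomes a straight line in log-log space, so the slope of the line is the exponent b.

```python
import numpy as np

# y = A * x**b  =>  log10(y) = b * log10(x) + log10(A)
# Quick check with an exact power law, A = 2.0 and b = 0.7:
A, b = 2.0, 0.7
x = np.logspace(0, 3, 20)
y = A * x**b

slope, intercept = np.polyfit(np.log10(x), np.log10(y), 1)
print(slope)  # → 0.7, the power-law exponent
```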

However, when I include my uncertainties as weights (using weights = 1.0/uncertainty), the fit does not look correct. Below is a reproduction of my code, as well as the fit result. Note that I apply the standard error propagation for log-transformed data, which is to say sigma_log10(y) = 0.434*(sigma_y/y).
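(A quick aside, separate from the fit code below: the 0.434 factor in that formula is just 1/ln(10), from the derivative of log10.)

```python
import numpy as np

# sigma_log10(y) = |d(log10 y)/dy| * sigma_y = sigma_y / (y * ln(10)),
# and 1/ln(10) is the 0.434 used in the code below:
factor = 1.0 / np.log(10.0)
print(factor)  # → 0.43429...
```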

import numpy as np

from lmfit.models import LinearModel

# Data

y = np.array([1.24, 4.47, 1.37, 2.14, 3.45, 9.51, 10.61, 1.19, 2.63, 6.16, 1.67, 2.07, 0.42, 1.59, 1.02, 1.73, 0.87, 1.8, 1.39, 1.53, 1.85, 2.67, 3.06, 3.19, 3.18, 3.32, 3.91, 3.51, 3.56, 4.13, 4.64, 4.87, 5.78, 4.82, 6.99, 7.32, 5.56, 6.87, 6.31, 7.08, 9.11, 8.26, 9.79, 7.73, 10.07, 9.75, 7.36, 9.46, 9.7, 9.64, 8.79, 12.3, 13.35, 15.17, 21.74])

y_err = np.array([0.38, 0.81, 0.39, 0.31, 1.36, 3.41, 1.08, 0.56, 0.17, 0.93, 0.35, 1.22, 0.28, 0.41, 0.4, 0.73, 0.23, 0.33, 0.33, 1.11, 0.5, 0.28, 2.75, 0.18, 0.9, 1.4, 0.31, 0.92, 0.89, 1.5, 1.09, 0.72, 1.84, 1.46, 3.7, 0.28, 0.99, 1.55, 0.13, 0.22, 3.35, 0.84, 1.27, 2.01, 0.47, 0.18, 0.41, 5.02, 2.3, 2.14, 1.7, 3.71, 1.81, 3.39, 0.37])

x = np.array([0.88, 53.82, 190.89, 19.68, 2596.36, 4727.29, 10613.36, 9.91, 60.91, 124.27, 4.98, 9.99, 58.34, 14.01, 0.72, 5.61, 244.41, 2.79, 10.01, 190.81, 12.32, 81.4, 4.47, 234.75, 2190.78, 2402.14, 15.16, 913.27, 4472.99, 631.83, 468.38, 427.17, 269.52, 166.19, 2.76, 162.4, 4371.17, 708.92, 661.6, 603.39, 295.53, 1998.01, 1203.92, 1952.53, 288.8, 576.23, 14811.47, 4174.44, 956.31, 8329.1, 158707.72, 22418.06, 2294.02, 6770.15, 45771.81])

# Convert to log space

y_log = np.log10(y)

x_log = np.log10(x)

y_err_log = 0.434*y_err/y

# Create lmfit model

model = LinearModel()

# Set initial parameters

params = model.make_params(slope=0.23, intercept=0.6)

result = model.fit(y_log, params, x=x_log, weights=1/y_err_log)

print(result.fit_report())

result.plot()

It produces the following report:

[[Model]]

Model(linear)

[[Fit Statistics]]

# fitting method = leastsq

# function evals = 6

# data points = 55

# variables = 2

chi-square = 1189.68506

reduced chi-square = 22.4468879

Akaike info crit = 173.076089

Bayesian info crit = 177.090755

[[Variables]]

slope: 0.23227611 +/- 0.01830329 (7.88%) (init = 0.23)

intercept: 0.22926360 +/- 0.06166159 (26.90%) (init = 0.6)

[[Correlations]] (unreported correlations are < 0.100)

C(slope, intercept) = -0.956

And here is an image of the fit:

By eye the fit looks off to me, but curve_fit agrees with the plot above. For comparison, here is the fit when I remove the weights argument:
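For completeness, the curve_fit comparison I mention is essentially the following (shown here with toy power-law data standing in for my arrays, so the snippet is self-contained):

```python
import numpy as np
from scipy.optimize import curve_fit

def line(x, slope, intercept):
    return slope * x + intercept

# Toy stand-in for my data: an exact power law y = 1.6 * x**0.5
x = np.logspace(0, 4, 40)
y = 1.6 * x**0.5
y_err = 0.05 * y  # 5% uncertainties

x_log, y_log = np.log10(x), np.log10(y)
y_err_log = 0.434 * y_err / y

# sigma plays the role of lmfit's 1/weights here
popt, pcov = curve_fit(line, x_log, y_log, p0=[0.23, 0.6],
                       sigma=y_err_log, absolute_sigma=True)
print(popt[0])  # → 0.5, the power-law exponent
```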

This looks more like what I was expecting, but I can't simply leave out the uncertainties in my measurements. Am I taking an incorrect approach here, or is there something I am missing? Any help would be appreciated, and please let me know if I can provide anything else.

Best Regards,

Jake Miller
