How to fit function parameters to data?


Nico Stuurman

Feb 19, 2019, 1:39:27 PM
to DDogleg
ddogleg looks like a great alternative to the Apache Commons Math library that I have been using for years (and never could wrap my head around).  However, I have trouble figuring out how to get started (or being sure that this is the correct library for what I am trying to do), doubtlessly caused by my general lack of understanding of this subject.

I would like to fit 2D data (x-y point pairs) to a given function (in this case f(x) = A * (1 - exp(-k*x)) ).  In the Apache Commons library I would use a Simplex optimizer to minimize the least-squares error between the function and the data (optimizing the values of A and k).  Is ddogleg the appropriate tool for this, and how would I go about using it for this problem?

Thanks!  - Nico
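For context, the objective being described here — minimize the sum of squared residuals between f(x) = A * (1 - exp(-k*x)) and the data over the parameters (A, k) — can be sketched in plain Java with no optimizer library at all. The class and method names below are illustrative, not from any library:

```java
public class SseSketch {
    // model: f(x) = A * (1 - exp(-k * x))
    static double model(double a, double k, double x) {
        return a * (1 - Math.exp(-k * x));
    }

    // sum of squared residuals; this is what a least-squares
    // optimizer minimizes over (A, k)
    static double sse(double a, double k, double[][] points) {
        double sum = 0;
        for (double[] p : points) {
            double r = p[1] - model(a, k, p[0]);
            sum += r * r;
        }
        return sum;
    }

    public static void main(String[] args) {
        // data generated exactly from A = 0.5, k = 0.2 (no noise)
        double[][] pts = new double[5][2];
        for (int i = 0; i < 5; i++) {
            pts[i][0] = i + 1;
            pts[i][1] = model(0.5, 0.2, i + 1);
        }
        // at the true parameters the objective is (numerically) zero
        System.out.println(sse(0.5, 0.2, pts) < 1e-12);
        // a different parameter pair gives a strictly larger objective
        System.out.println(sse(1.0, 1.0, pts) > sse(0.5, 0.2, pts));
    }
}
```

An optimizer such as DDogleg's Levenberg-Marquardt is handed the individual residuals rather than the summed objective, and searches (A, k) for the minimum of this same quantity.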

Nico Stuurman

Feb 19, 2019, 9:03:21 PM
to DDogleg
Hold on!  I think that has all the answers.  Let me play with that, and I'll hopefully figure it out (or at least have a better question to ask;)

Nico Stuurman

Feb 20, 2019, 2:00:15 PM
to DDogleg
Wonderful!  I have my proof of concept working.  The syntax looks very nice and straightforward.  I'll paste the code here in case it is useful to anyone:

package edu.valelab.testddogleg;

import org.ddogleg.optimization.FactoryOptimization;
import org.ddogleg.optimization.UnconstrainedLeastSquares;
import org.ddogleg.optimization.UtilOptimize;
import org.ddogleg.optimization.functions.FunctionNtoM;
import org.ejml.data.DMatrixRMaj;

import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class Main {

   /**
    * Function to be fitted. This particular function is: F(x) = A * (1 - exp(-k * x))
    */
   private static class RecoveryFunction implements FunctionNtoM {

      List<Point2D> data_;

      /**
       * @param data actual observations, here a list of x-y values
       */
      RecoveryFunction(List<Point2D> data) {
         this.data_ = data;
      }

      /**
       * Process function returns the error between each data point and the function value for that point.
       * @param input function parameters to be fitted (here: A and k)
       * @param output residual error, here: root of the square of the difference between function value
       *               and measured value
       */
      @Override
      public void process(double[] input, double[] output) {
         double a = input[0];
         double k = input[1];

         for (int i = 0; i < data_.size(); i++) {
            Point2D p = data_.get(i);

            double y = a * (1 - Math.exp(-k * p.getX()));
            output[i] = Math.sqrt((p.getY() - y) * (p.getY() - y));
         }
      }

      @Override
      public int getNumOfInputsN() {
         return 2;
      }

      @Override
      public int getNumOfOutputsM() {
         return data_.size();
      }
   }

   public static void main(String[] args) {
      // true parameters of the curve F(x) = A * (1 - exp(-k * x))
      double a = 0.5;
      double k = 0.2;

      // randomly generate points along the curve
      Random rand = new Random(234);
      List<Point2D> points = new ArrayList<>();
      for (int i = 1; i < 20; i++) {
         double val = a * (1 - Math.exp(-k * (double) i));
         // add noise to the measured value
         double t = (rand.nextDouble() - 0.5) / 10.0;
         points.add(new Point2D.Double((double) i, val + t * val));
      }

      // Define the function being optimized and create the optimizer
      FunctionNtoM func = new RecoveryFunction(points);
      UnconstrainedLeastSquares<DMatrixRMaj> optimizer = FactoryOptimization.levenbergMarquardt(null, true);

      // Send progress information to standard out
      optimizer.setVerbose(System.out, 0);

      // if no Jacobian is specified it will be computed numerically
      optimizer.setFunction(func, null);

      // provide it an extremely crude initial estimate of the equation
      optimizer.initialize(new double[]{1.0, 1.0}, 1e-12, 1e-12);

      // iterate 500 times or until it converges.
      // Manual iteration is possible too if more control is required
      UtilOptimize.process(optimizer, 500);

      double[] found = optimizer.getParameters();

      // see how accurately it found the solution
      System.out.println("Final Error = " + optimizer.getFunctionValue());

      // Compare the actual parameters to the found parameters
      System.out.printf("Actual A %5.2f found %5.2f\n", a, found[0]);
      System.out.printf("Actual k %5.2f found %5.2f\n", k, found[1]);
   }
}
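Since the numeric Jacobian comes up above: for this model the analytic partial derivatives are simple, and DDogleg can be given them instead of numeric differentiation (via the second argument of setFunction). Independently of any library, the analytic partials of the residual r = measured - A * (1 - exp(-k*x)) can be checked against a central finite difference in plain Java; this is a sketch, not DDogleg code:

```java
public class JacobianCheck {
    // model: f(x) = A * (1 - exp(-k * x))
    static double model(double a, double k, double x) {
        return a * (1 - Math.exp(-k * x));
    }

    public static void main(String[] args) {
        double a = 0.5, k = 0.2, x = 3.0;

        // analytic partials of the residual r = d - f(x) w.r.t. A and k
        // (the measured value d drops out of the derivative)
        double drda = -(1 - Math.exp(-k * x));
        double drdk = -a * x * Math.exp(-k * x);

        // central finite differences of -f for comparison
        double h = 1e-6;
        double fdA = -(model(a + h, k, x) - model(a - h, k, x)) / (2 * h);
        double fdK = -(model(a, k + h, x) - model(a, k - h, x)) / (2 * h);

        System.out.println(Math.abs(drda - fdA) < 1e-8);
        System.out.println(Math.abs(drdk - fdK) < 1e-8);
    }
}
```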

Nico Stuurman

Feb 20, 2019, 8:16:26 PM
to DDogleg
As Peter pointed out by email, the process function should not return the root of the square of the error, but rather the error itself:

output[i] = p.getY() - y;

This indeed works a lot better.  Also, by changing to the Cauchy optimizer (FactoryOptimization.cauchy(null);),
the range of initial guesses that still lead to the correct answer increases quite a bit.
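To make the fix concrete: Math.sqrt(r*r) is just |r|, which discards the sign of the residual (whether the model is above or below the data) and is non-differentiable at zero, while the raw residual keeps the information the optimizer needs. A minimal illustration, with made-up numbers and no DDogleg involved:

```java
public class ResidualSign {
    public static void main(String[] args) {
        double measured = 0.30;
        double predicted = 0.35;   // model overshoots the data here

        double raw = measured - predicted;      // negative: sign preserved
        double rooted = Math.sqrt(raw * raw);   // |raw|: sign lost

        // the raw residual records the direction of the error
        System.out.println(raw < 0);
        // sqrt(r*r) is always non-negative, hiding that direction
        System.out.println(rooted > 0);
    }
}
```

Both forms give the same sum of squares, so the objective value is unchanged; what changes is the Jacobian the optimizer sees, which is why the raw residual converges better.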