just to make sure: gmm = generalized method of moments
What kind of constraints, just bounds on the parameters, or more
complicated constraints?
Do you expect the constraints to be binding, or are they just to keep
the optimizer in the correct range?
I don't think it would be difficult to add constraints, either through
reparameterization or by using a constraint solver (l-bfgs-b, cobyla,
slsqp). I think we have examples for both already in statsmodels.
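Roughly, the solver route looks like the following sketch with
scipy.optimize (the objective, starting values, and constraint
coefficients here are made up for illustration; in GMM the objective
would be the quadratic form of the moment conditions):

import numpy as np
from scipy.optimize import minimize

# Stand-in for the GMM objective g(theta)' W g(theta); purely illustrative.
def objective(theta):
    return (theta[0] - 1.0) ** 2 + (theta[1] + 0.5) ** 2

# Inequality constraint a1*theta1 + a2*theta2 >= 0, with made-up coefficients.
a = np.array([1.0, 2.0])
constraints = [{"type": "ineq", "fun": lambda theta: a @ theta}]

res = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
               constraints=constraints)
print(res.x, res.fun)

Simple bounds work the same way with method="L-BFGS-B" and a bounds
argument instead of the constraints list.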
The main problem if the constraints might be binding is in the
statistical theory. My recent answer on the scipy-user mailing list:
http://mail.scipy.org/pipermail/scipy-user/2012-February/031585.html
I know that in this case the distribution of the parameters is
different than in the standard interior solution case, but I never
looked closely enough at these problems to tell what we could do with
parameters on the boundary of the parameter space.
If reparameterization, for example, works and the optimization
converges, then there is no problem, I think.
Josef
What kind of model are you looking at? Until now, GMM in statsmodels
is mostly the generic framework, with instrumental variables in a
linear model as the only worked-out case.
>
>>
>> What kind of constraints, just bounds on the parameters, or more
>> complicated constraints?
>
> probably linear constraints like a1 theta1 + a2 theta2 >= 0, possibly
> nonlinear constraints like f(theta1, theta2) >= 0.
>
>> Do you expect the constraints to be binding, or are they just to keep
>> the optimizer in the correct range?
>
> I know the unconstrained solution does not satisfy the inequality; I
> guess it's possible the constrained solution would be a local minimum
> in the interior of the parameter space. Ultimately, I am unsure
> whether the constraints will be binding.
>
>>
>> I don't think it would be difficult to add constraints, either through
>> reparameterization or by using a constraint solver (l-bfgs-b, cobyla,
>> slsqp). I think we have examples for both already in statsmodels.
>
> Do you have any suggestions on how to locate these examples?
I need to look. We have reparameterization in several places: one is
ARMA, to impose stationarity; discrete models use exp and logit (or
the links in GLM) to keep the distribution parameter in the required
range.
I'm not sure Skipper is using the constraint solvers with constraints.
>
>>
>> The main problem if the constraints might be binding is in the
>> statistical theory. My recent answer on the scipy-user mailing list:
>> http://mail.scipy.org/pipermail/scipy-user/2012-February/031585.html
>
> This is very interesting, and good to know that the standard
> statistics do not apply if constraints are binding. If my estimated
> parameters land on the constraints I can look into how one should
> calculate the appropriate statistics.
I briefly skimmed some papers by Donald Andrews, and they are no fun ;)
It might require some work to find an understandable paper.
>
>>
>> I know that in this case the distribution of the parameters is
>> different than in the standard interior solution case, but I never
>> looked closely enough at these problems to tell what we could do with
>> parameters on the boundary of the parameter space.
>>
>> If reparameterization, for example, works and the optimization
>> converges, then there is no problem, I think.
>
> Is the idea behind reparameterization to assume the constraint binds,
> substitute that equality for one of the parameters in the model and re-
> estimate?
No, what I have in mind is the case when we assume the constraint
might not be binding.
For example:
beta > 0: use an exp/log transform, beta = exp(params).
params is defined on the entire real line and can be used by the
optimizer; beta is used inside the function (likelihood function,
moment condition).
0 < beta < 1: one possibility is a logit-type transform, beta =
1/(1 + exp(params)) or something like this; the bounds can be changed
with extra arguments.
beta vector in unit simplex: similar to multinomial logit transform
for lag polynomials, Skipper found a transform in the literature that
imposes stationarity (all roots outside the unit circle)
and so on. But it might not be easy or possible with complicated constraints.
In these cases a binding constraint would mean that params goes off to
minus or plus infinity.
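A small sketch of what I mean, using a made-up objective and the
transforms above (names and numbers are just for illustration):

import numpy as np
from scipy.optimize import minimize

# Map unrestricted params (whole real line) to the restricted beta.
def to_positive(params):        # beta > 0
    return np.exp(params)

def to_unit_interval(params):   # 0 < beta < 1
    return 1.0 / (1.0 + np.exp(params))

def to_simplex(params):         # beta_i >= 0, sum(beta) = 1, multinomial-logit style
    e = np.exp(np.concatenate([params, [0.0]]))
    return e / e.sum()

# Toy objective written in terms of the restricted beta; the optimizer
# only ever sees the unrestricted params.
def objective(params):
    beta = to_positive(params)
    return np.sum((beta - 2.0) ** 2)

res = minimize(objective, x0=[0.0], method="BFGS")
beta_hat = to_positive(res.x)
# If the constraint were binding (beta_hat near 0), res.x would drift
# toward -inf, which is the warning sign mentioned above.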
Josef
One useful contribution, given that you are working on this in R,
would be converting an example to a test case that we could use.
One reason GMM is in the sandbox and I stopped working on it was that
I didn't have any reference examples (that I was specifically
interested in at the time).
Josef