Hello Group,
I've recently been experimenting with increasing the dimensionality of
my embeddings from a few dozen to several thousand, but I ran into a
compile-time issue with ceres::AutoDiffCostFunction.
My previous setup follows what was already discussed on the list: for
each training example I do:
ceres::CostFunction *CostFunctor = new ceres::AutoDiffCostFunction<
    TripletCostFunctor,
    1,             /* One residual per cost functor */
    Dimensionality /* Size of parameter block */
    >(new TripletCostFunctor(*this, single_training_example));

CeresProblem.AddResidualBlock(
    CostFunctor,
    nullptr, /* No loss function */
    StartingWeights);
The only thing I have changed is the Dimensionality template
parameter, which is now in the several thousands. This causes GCC to
choke:
/usr/include/ceres/internal/autodiff.h:192:63: fatal error: template instantiation depth exceeds maximum of 900 (use '-ftemplate-depth=' to increase the maximum)
192 | Make1stOrderPerturbation<j + 1, N, Offset, T, JetT>::Apply(src, dst);
If I bump -ftemplate-depth to 3000, it compiles fine. But that is well
beyond the 1024 recursively nested template instantiations that the
ISO C++ standard recommends (Annex B) implementations support. Would
you recommend I increase -ftemplate-depth, or something else?
I feel like this is an inelegant solution. But then I suspect many ML
applications deal with vectors of large dimensionality, so I figured I
would ask an expert.
Yours truly,
--
Kip Warner
OpenPGP signed/encrypted mail preferred
https://www.thevertigo.com