Thanks, Josh. :D
For me the most amazing experience was coding the inverseNetwork function, which takes a
neural network and trains a new one to be the inverse of the original.
Writing it felt really weird because I kept thinking, "this can't really work... can it?"
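The way I think about it: sample the forward network, then train the inverse on the swapped (output, input) pairs. Here's a toy sketch of just the data-generation step — this is my own illustration, not Kinann's actual code, and `forward` is a trivial stand-in for a trained network:

```javascript
// Hypothetical stand-in for a trained, invertible network's activate function.
const forward = (x) => [2 * x[0] + 1];

// Build training examples for the inverse network: each input is the
// forward network's output, and each target is the forward network's input.
const examples = [];
for (let x0 = 0; x0 <= 1; x0 += 0.25) {
    const x = [x0];
    examples.push({ input: forward(x), target: x });
}
// A new network trained on `examples` approximates the inverse of `forward`.
```

Train a fresh network on those pairs and it learns to undo the original — which is why it works even though it feels like it shouldn't.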
I am deeply grateful to the mathjs folks. It is pretty incredible to work with code that does symbolic math.
For example, everything below was cranked out automatically — it's the gradient expression for a simple Kinann network.
Kinann then compiles all of that into JavaScript using new Function(), which I started using for the
first time just a few weeks ago. It's this kind of stuff that gives JavaScript so much power over C++ or Java.
gradExpr {
"w1b0": "(2 * (w1b0 - yt0 + w1r0c0 * x0 + w1r0c1 * x1 + w1r0c2 * x2 + w1r0c3 * x3) + 0) / 2",
"w1b1": "(2 * (w1b1 - yt1 + w1r1c0 * x0 + w1r1c1 * x1 + w1r1c2 * x2 + w1r1c3 * x3) + 0) / 2",
"w1b2": "(2 * (w1b2 - yt2 + w1r2c0 * x0 + w1r2c1 * x1 + w1r2c2 * x2 + w1r2c3 * x3) + 0) / 2",
"w1b3": "(2 * (w1b3 - yt3 + w1r3c0 * x0 + w1r3c1 * x1 + w1r3c2 * x2 + w1r3c3 * x3) + 0) / 2",
"w1r0c0": "(2 * (x0 + 0) * (w1b0 - yt0 + w1r0c0 * x0 + w1r0c1 * x1 + w1r0c2 * x2 + w1r0c3 * x3) + 0) / 2",
"w1r0c1": "(2 * (x1 + 0) * (w1b0 - yt0 + w1r0c0 * x0 + w1r0c1 * x1 + w1r0c2 * x2 + w1r0c3 * x3) + 0) / 2",
"w1r0c2": "(2 * (x2 + 0) * (w1b0 - yt0 + w1r0c0 * x0 + w1r0c1 * x1 + w1r0c2 * x2 + w1r0c3 * x3) + 0) / 2",
"w1r0c3": "(2 * (x3 + 0) * (w1b0 - yt0 + w1r0c0 * x0 + w1r0c1 * x1 + w1r0c2 * x2 + w1r0c3 * x3) + 0) / 2",
"w1r1c0": "(2 * (x0 + 0) * (w1b1 - yt1 + w1r1c0 * x0 + w1r1c1 * x1 + w1r1c2 * x2 + w1r1c3 * x3) + 0) / 2",
"w1r1c1": "(2 * (x1 + 0) * (w1b1 - yt1 + w1r1c0 * x0 + w1r1c1 * x1 + w1r1c2 * x2 + w1r1c3 * x3) + 0) / 2",
"w1r1c2": "(2 * (x2 + 0) * (w1b1 - yt1 + w1r1c0 * x0 + w1r1c1 * x1 + w1r1c2 * x2 + w1r1c3 * x3) + 0) / 2",
"w1r1c3": "(2 * (x3 + 0) * (w1b1 - yt1 + w1r1c0 * x0 + w1r1c1 * x1 + w1r1c2 * x2 + w1r1c3 * x3) + 0) / 2",
"w1r2c0": "(2 * (x0 + 0) * (w1b2 - yt2 + w1r2c0 * x0 + w1r2c1 * x1 + w1r2c2 * x2 + w1r2c3 * x3) + 0) / 2",
"w1r2c1": "(2 * (x1 + 0) * (w1b2 - yt2 + w1r2c0 * x0 + w1r2c1 * x1 + w1r2c2 * x2 + w1r2c3 * x3) + 0) / 2",
"w1r2c2": "(2 * (x2 + 0) * (w1b2 - yt2 + w1r2c0 * x0 + w1r2c1 * x1 + w1r2c2 * x2 + w1r2c3 * x3) + 0) / 2",
"w1r2c3": "(2 * (x3 + 0) * (w1b2 - yt2 + w1r2c0 * x0 + w1r2c1 * x1 + w1r2c2 * x2 + w1r2c3 * x3) + 0) / 2",
"w1r3c0": "(2 * (x0 + 0) * (w1b3 - yt3 + w1r3c0 * x0 + w1r3c1 * x1 + w1r3c2 * x2 + w1r3c3 * x3) + 0) / 2",
"w1r3c1": "(2 * (x1 + 0) * (w1b3 - yt3 + w1r3c0 * x0 + w1r3c1 * x1 + w1r3c2 * x2 + w1r3c3 * x3) + 0) / 2",
"w1r3c2": "(2 * (x2 + 0) * (w1b3 - yt3 + w1r3c0 * x0 + w1r3c1 * x1 + w1r3c2 * x2 + w1r3c3 * x3) + 0) / 2",
"w1r3c3": "(2 * (x3 + 0) * (w1b3 - yt3 + w1r3c0 * x0 + w1r3c1 * x1 + w1r3c2 * x2 + w1r3c3 * x3) + 0) / 2"
}
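Here's roughly what that compile step looks like — my own sketch, not the actual Kinann code — using the first gradExpr entry above:

```javascript
// The "w1b0" gradient expression string from the gradExpr output above.
const expr = "(2 * (w1b0 - yt0 + w1r0c0 * x0 + w1r0c1 * x1 + w1r0c2 * x2 + w1r0c3 * x3) + 0) / 2";

// The free variables of the expression become the function's parameters.
const vars = ["w1b0", "yt0", "w1r0c0", "w1r0c1", "w1r0c2", "w1r0c3",
              "x0", "x1", "x2", "x3"];

// new Function() compiles the expression string into ordinary JavaScript,
// so evaluating the gradient is just a plain (fast) function call.
const gradW1b0 = new Function(...vars, "return " + expr + ";");

// Evaluate at a sample point: inner term is 0.5 - 1.0 + 1*1 = 0.5,
// so the gradient is (2 * 0.5 + 0) / 2 = 0.5.
const g = gradW1b0(0.5, 1.0, 1, 0, 0, 0, 1, 0, 0, 0);
console.log(g); // 0.5
```

No parsing at run time, no eval of strings in a loop — the string gets turned into real code once, which is exactly the trick that's hard to pull off in C++ or Java.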
Kinann will really help with calibration. Now the challenge is taking the measurements that Kinann needs for its magic.
A Cartesian robot needs at least 9 data points to calculate linear error. FPD would probably require a quadratic error
model for calibration, and a quadratic model will require at least 27 points. That is a challenge, but it is still simpler than
the DeltaMesh code I wrote last year. For now I'm confining myself to a really simple Cartesian FirePaste robot just
to work out the software architecture for autocalibration. I'll keep y'all posted as I dig in deeper.
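P.S. Here's roughly how I picture laying out those 27 measurement points — just my own sketch of a 3x3x3 grid over the work volume, not anything Kinann actually provides:

```javascript
// Hypothetical layout for quadratic-calibration measurements: a 3x3x3
// grid (min, middle, max along each axis) gives exactly 27 points.
function measurementGrid(min, max) {
    const levels = [min, (min + max) / 2, max];
    const points = [];
    for (const x of levels)
        for (const y of levels)
            for (const z of levels)
                points.push([x, y, z]);
    return points;
}

const grid = measurementGrid(0, 100);
console.log(grid.length); // 27
```

A linear model would only need the 8 corners plus the center (9 points), which lines up with the Cartesian case.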