Confirmed...
Indeed, one just needs to ensure that the model's weights and inputs are of type Array{Fixed{...}}, and the forward computation comes out correct.
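For anyone following along, here is a minimal sketch of such a fixed-point forward pass on the CPU, assuming Fixed comes from FixedPointNumbers (the Q7.8 format and the layer sizes are arbitrary illustration choices, not my actual model):
```
# Minimal fixed-point forward pass sketch (assumes FixedPointNumbers;
# the 16-bit Q7.8 format and the sizes are arbitrary illustration choices).
using FixedPointNumbers

const Q = Fixed{Int16,8}                 # 16-bit fixed point, 8 fractional bits

w = Q.(randn(Float32, 2, 3) .* 0.1f0)    # quantised weights
b = Q.(zeros(Float32, 2))                # quantised bias
x = Q.(rand(Float32, 3, 4))              # quantised inputs

y = w * x .+ b                           # generic matmul/broadcast work element-wise on Fixed
```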
I have a further question, though, because it seems that I cannot train with either Array{Fixed{...}} or Array{Float32}. By that I mean that my trainresults(), which comes from the tutorials, produces no improvement with those data types. It feels as if CPU training is broken.
If I disable GPU usage with
```
Knet.atype() = Array{Float32}
```
then the same code produces the following error:
```
Stacktrace:
[1] (::Chain)(::Array{Float32,2}, ::Array{Float32,1}) at .\In[1]:24
[2] (::Knet.var"#693#694"{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}},Tuple{Array{Float32,2},Array{Float32,1}}})() at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:205
[3] differentiate(::Function; o::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:144
[4] differentiate at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:135 [inlined]
[5] iterate at ***\.julia\packages\Knet\Fpb6K\src\train.jl:23 [inlined]
[6] iterate(::Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at ***\.julia\packages\Knet\Fpb6K\src\progress.jl:70
[7] iterate at ***\.julia\packages\IterTools\0dYLc\src\IterTools.jl:82 [inlined]
[8] iterate at .\generator.jl:44 [inlined]
[9] iterate at .\iterators.jl:1056 [inlined]
[10] iterate at .\iterators.jl:1052 [inlined]
[11] grow_to!(::Array{Any,1}, ::Base.Iterators.Flatten{Base.Generator{IterTools.TakeNth{Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}},var"#7#8"{Chain,Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at .\array.jl:726
[12] _collect at .\array.jl:639 [inlined]
[13] collect(::Base.Iterators.Flatten{Base.Generator{IterTools.TakeNth{Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}},var"#7#8"{Chain,Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at .\array.jl:603
[14] trainresults(::Tuple{Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}, ::Chain, ::Nothing; lr::Float64, repeatD::Int64, optimiser::Function) at .\In[1]:33
[15] top-level scope at In[1]:61
[16] eval at .\boot.jl:331 [inlined]
[17] softscope_include_string(::Module, ::String, ::String) at ***\.julia\packages\SoftGlobalScope\u4UzH\src\SoftGlobalScope.jl:217
[18] execute_request(::ZMQ.Socket, ::IJulia.Msg) at ***\.julia\packages\IJulia\DrVMH\src\execute_request.jl:67
[19] #invokelatest#1 at .\essentials.jl:712 [inlined]
[20] invokelatest at .\essentials.jl:711 [inlined]
[21] eventloop(::ZMQ.Socket) at ***\.julia\packages\IJulia\DrVMH\src\eventloop.jl:8
[22] (::IJulia.var"#15#18")() at .\task.jl:358
MethodError: no method matching Array(::AutoGrad.Result{Array{Float32,2}})
Closest candidates are:
Array(!Matched::LinearAlgebra.SymTridiagonal) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\LinearAlgebra\src\tridiag.jl:111
Array(!Matched::LinearAlgebra.Tridiagonal) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\LinearAlgebra\src\tridiag.jl:528
Array(!Matched::LinearAlgebra.AbstractTriangular) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\LinearAlgebra\src\triangular.jl:162
...
Stacktrace:
[1] differentiate(::Function; o::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:148
[2] differentiate at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:135 [inlined]
[3] iterate at ***\.julia\packages\Knet\Fpb6K\src\train.jl:23 [inlined]
[4] iterate(::Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at ***\.julia\packages\Knet\Fpb6K\src\progress.jl:70
[5] iterate at ***\.julia\packages\IterTools\0dYLc\src\IterTools.jl:82 [inlined]
[6] iterate at .\generator.jl:44 [inlined]
[7] iterate at .\iterators.jl:1056 [inlined]
[8] iterate at .\iterators.jl:1052 [inlined]
[9] grow_to!(::Array{Any,1}, ::Base.Iterators.Flatten{Base.Generator{IterTools.TakeNth{Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}},var"#7#8"{Chain,Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at .\array.jl:726
[10] _collect at .\array.jl:639 [inlined]
[11] collect(::Base.Iterators.Flatten{Base.Generator{IterTools.TakeNth{Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}},var"#7#8"{Chain,Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at .\array.jl:603
[12] trainresults(::Tuple{Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}, ::Chain, ::Nothing; lr::Float64, repeatD::Int64, optimiser::Function) at .\In[1]:33
[13] top-level scope at In[1]:61
```
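For what it's worth, the MethodError itself can be reproduced in isolation: inside @diff, intermediate values are AutoGrad.Result wrappers, and Base has no Array(::AutoGrad.Result) constructor. A hedged sketch, not taken from the attached example (the names below are made up):
```
# Hedged reproduction of the MethodError above (illustrative names, not the
# attached example): calling Array() on a tracked intermediate fails because
# AutoGrad.Result is not an AbstractArray.
using AutoGrad

w = Param(randn(Float32, 2, 2))
x = randn(Float32, 2, 5)

# Inside @diff, w * x is an AutoGrad.Result{Array{Float32,2}}; Array(...) on it
# throws: MethodError: no method matching Array(::AutoGrad.Result{Array{Float32,2}})
J = @diff sum(Array(w * x))
```
This matches frame [1] of the second trace, where differentiate rethrows the exception raised during the forward pass at core.jl:148.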
Attached is a minimal working example.
A parallel investigation led me to notice the following discrepancy between the @diff output for KnetArray{Float32} and for Array{Fixed{...}}/Array{Float32}:
```
# KnetArray{Float32}
collect(params(@diff model(dtrn)))
# --> 4-element Array{Param,1}: P(KnetArray{Float32,2}(2,2)) P(KnetArray{Float32,1}(2)) P(KnetArray{Float32,2}(1,2)) P(KnetArray{Float32,1}(1))
# Array{Fixed{}/Float32}
collect(params(@diff modelQ(dtrnQ)))
# --> 0-element Array{Param,1}
```
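For comparison, this is a minimal sketch of what I would expect a working CPU run to return, with an illustrative Dense layer rather than the attached Chain (names and sizes are made up):
```
# Expected CPU behaviour sketch (illustrative Dense layer, not the attached Chain).
using Knet

struct Dense; w; b; end
Dense(i::Int, o::Int) = Dense(Param(randn(Float32, o, i)), Param(zeros(Float32, o)))
(d::Dense)(x) = d.w * x .+ d.b

model = Dense(2, 1)
x = randn(Float32, 2, 10)
y = randn(Float32, 1, 10)

J = @diff sum(abs2.(model(x) .- y))   # loss recorded on a Tape
collect(params(J))                    # expected: 2-element Array{Param,1}, not 0-element
grad(J, model.w)                      # expected: a non-nothing 1×2 gradient
```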
This seems to be a recent development: CPU @diff, and with it backpropagation and training, appears to be impaired.