Alternative Number Formats (e.g. Quarter-Precision Floats or Fixed Point)

Ross Andrew Donnachie

Aug 6, 2020, 4:56:05 AM
to knet-users
Good day,

I am looking to transition my CNN to a different number format. Before I begin this endeavour, I thought I would post here in case someone has pointers and, if not, so that my findings end up consolidated on a shared platform.

There is no mention of 16-bit formats, let alone FixedPointNumbers, in this forum, the repository, or the documentation. I anticipate that KnetArrays will not support number formats other than 32/64-bit floats, but that regular Array execution (on the CPU) shouldn't have any issues.
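
For concreteness, the kind of thing I have in mind is along these lines (just a sketch using FixedPointNumbers.jl; the particular storage width and fraction bits are only an example):

```
using FixedPointNumbers

# Fixed{Int16,8}: 16-bit storage with 8 fractional bits (the Q7f8 format).
w = Fixed{Int16,8}.(0.1f0 .* randn(Float32, 2, 3))
x = Fixed{Int16,8}.(rand(Float32, 3, 4))

y = w * x        # generic Array matmul on the CPU, no GPU kernels involved
eltype(y)        # should stay a Fixed element type
```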

Kind Regards,
Ross

Deniz Yuret

Aug 7, 2020, 12:01:49 PM
to Ross Andrew Donnachie, knet-users
That is currently the situation AFAIK.


radon...@gmail.com

Aug 15, 2020, 6:26:02 AM
to knet-users
Confirmed...

Indeed, one just needs to ensure that the model's weights and inputs are of type Array{Fixed{...}}, and the forward computation then comes out correct.
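
In case it is useful to others, the conversion amounts to something like the following (a rough sketch; the Q alias, the layer struct, and the sizes are mine, purely for illustration):

```
using FixedPointNumbers

const Q = Fixed{Int16,8}          # illustrative 16-bit fixed-point element type

# A minimal dense layer that is agnostic to the array element type.
struct Dense; w; b; end
(d::Dense)(x) = d.w * x .+ d.b

# Float32 reference layer, and the same layer with weights cast to fixed point.
d32 = Dense(0.1f0 .* randn(Float32, 4, 8), zeros(Float32, 4))
dQ  = Dense(Q.(d32.w), Q.(d32.b))

x32 = rand(Float32, 8, 32)
xQ  = Q.(x32)                     # the inputs have to be converted as well

# The forward pass runs entirely on plain Arrays; only quantisation error remains.
maximum(abs.(Float32.(dQ(xQ)) .- d32(x32)))
```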

I have a further question, though, because it seems that I cannot train with either Array{Fixed{...}} or Array{Float32}. By that I mean that my trainresults(), which comes from the tutorials, produces no improvement with those data types. It feels as if CPU training is broken.

If I disable GPU usage with
```
Knet.atype() = Array{Float32}
```
then the same code produces the following error:

```
Stacktrace:
 [1] (::Chain)(::Array{Float32,2}, ::Array{Float32,1}) at .\In[1]:24
 [2] (::Knet.var"#693#694"{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}},Tuple{Array{Float32,2},Array{Float32,1}}})() at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:205
 [3] differentiate(::Function; o::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:144
 [4] differentiate at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:135 [inlined]
 [5] iterate at ***\.julia\packages\Knet\Fpb6K\src\train.jl:23 [inlined]
 [6] iterate(::Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at ***\.julia\packages\Knet\Fpb6K\src\progress.jl:70
 [7] iterate at ***\.julia\packages\IterTools\0dYLc\src\IterTools.jl:82 [inlined]
 [8] iterate at .\generator.jl:44 [inlined]
 [9] iterate at .\iterators.jl:1056 [inlined]
 [10] iterate at .\iterators.jl:1052 [inlined]
 [11] grow_to!(::Array{Any,1}, ::Base.Iterators.Flatten{Base.Generator{IterTools.TakeNth{Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}},var"#7#8"{Chain,Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at .\array.jl:726
 [12] _collect at .\array.jl:639 [inlined]
 [13] collect(::Base.Iterators.Flatten{Base.Generator{IterTools.TakeNth{Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}},var"#7#8"{Chain,Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at .\array.jl:603
 [14] trainresults(::Tuple{Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}, ::Chain, ::Nothing; lr::Float64, repeatD::Int64, optimiser::Function) at .\In[1]:33
 [15] top-level scope at In[1]:61
 [16] eval at .\boot.jl:331 [inlined]
 [17] softscope_include_string(::Module, ::String, ::String) at ***\.julia\packages\SoftGlobalScope\u4UzH\src\SoftGlobalScope.jl:217
 [18] execute_request(::ZMQ.Socket, ::IJulia.Msg) at ***\.julia\packages\IJulia\DrVMH\src\execute_request.jl:67
 [19] #invokelatest#1 at .\essentials.jl:712 [inlined]
 [20] invokelatest at .\essentials.jl:711 [inlined]
 [21] eventloop(::ZMQ.Socket) at ***\.julia\packages\IJulia\DrVMH\src\eventloop.jl:8
 [22] (::IJulia.var"#15#18")() at .\task.jl:358
MethodError: no method matching Array(::AutoGrad.Result{Array{Float32,2}})
Closest candidates are:
  Array(!Matched::LinearAlgebra.SymTridiagonal) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\LinearAlgebra\src\tridiag.jl:111
  Array(!Matched::LinearAlgebra.Tridiagonal) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\LinearAlgebra\src\tridiag.jl:528
  Array(!Matched::LinearAlgebra.AbstractTriangular) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\LinearAlgebra\src\triangular.jl:162
  ...

Stacktrace:
 [1] differentiate(::Function; o::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:148
 [2] differentiate at ***\.julia\packages\AutoGrad\VFrAv\src\core.jl:135 [inlined]
 [3] iterate at ***\.julia\packages\Knet\Fpb6K\src\train.jl:23 [inlined]
 [4] iterate(::Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at ***\.julia\packages\Knet\Fpb6K\src\progress.jl:70
 [5] iterate at ***\.julia\packages\IterTools\0dYLc\src\IterTools.jl:82 [inlined]
 [6] iterate at .\generator.jl:44 [inlined]
 [7] iterate at .\iterators.jl:1056 [inlined]
 [8] iterate at .\iterators.jl:1052 [inlined]
 [9] grow_to!(::Array{Any,1}, ::Base.Iterators.Flatten{Base.Generator{IterTools.TakeNth{Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}},var"#7#8"{Chain,Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at .\array.jl:726
 [10] _collect at .\array.jl:639 [inlined]
 [11] collect(::Base.Iterators.Flatten{Base.Generator{IterTools.TakeNth{Knet.Progress{Knet.Minimize{IterTools.NCycle{Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}},var"#7#8"{Chain,Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}}}) at .\array.jl:603
 [12] trainresults(::Tuple{Data{Tuple{Array{Float32,2},Array{Float32,1}}},Data{Tuple{Array{Float32,2},Array{Float32,1}}}}, ::Chain, ::Nothing; lr::Float64, repeatD::Int64, optimiser::Function) at .\In[1]:33
 [13] top-level scope at In[1]:61
```

Attached is the minimum working example.

A parallel investigation led me to notice the following discrepancy between the @diff output for KnetArray{Float32} and for Array{Fixed{...}}/Array{Float32}:

```
# KnetArray{Float32}
collect(params(@diff model(dtrn)))
# --> 4-element Array{Param,1}: P(KnetArray{Float32,2}(2,2)) P(KnetArray{Float32,1}(2)) P(KnetArray{Float32,2}(1,2)) P(KnetArray{Float32,1}(1))

# Array{Fixed{...}} / Array{Float32}
collect(params(@diff modelQ(dtrnQ)))
# --> 0-element Array{Param,1}
```

This seems to be a new problem: CPU @diff, and therefore backpropagation and training, appear to be impaired.
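
For reference, the reduced check on the CPU side looks roughly like this (a sketch patterned on the tutorial Dense/Chain definitions; the layer sizes and loss are only illustrative, not the attached example itself):

```
using Knet, AutoGrad, Statistics

# Tutorial-style layers; param/param0 create Knet Params with the given atype.
struct Dense; w; b; f; end
Dense(i, o; f=identity, atype=Array{Float32}) =
    Dense(param(o, i; atype=atype), param0(o; atype=atype), f)
(d::Dense)(x) = d.f.(d.w * x .+ d.b)

struct Chain; layers; end
Chain(layers...) = Chain(layers)
(c::Chain)(x) = (for l in c.layers; x = l(x); end; x)
(c::Chain)(x, y) = mean(abs2, c(x) .- y)      # simple regression loss

model = Chain(Dense(2, 2; f=relu), Dense(2, 1))
x, y = randn(Float32, 2, 10), randn(Float32, 1, 10)

tape = @diff model(x, y)
collect(params(tape))   # with atype=KnetArray{Float32} this lists the 4 Params;
                        # with plain Arrays I see an empty list, as above
```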

radon...@gmail.com

Aug 16, 2020, 9:05:22 AM
to knet-users
I have posted a GitHub issue that encapsulates the problem above.