ildoonet
September 11
I recently implemented a couple of deep learning algorithms that provide fast inference on embedded devices.
One of them is called 'LCNN'; you can find the original paper here: https://arxiv.org/abs/1611.06473
I implemented it with TensorFlow:
tf-lcnn - Tensorflow implementation for 'LCNN: Lookup-based Convolutional Neural Network'. Predict Faster using Models Trained Fast with Multi-GPUs
This code compresses AlexNet, which takes roughly 150ms or more per inference on a single CPU core,
into a sparse, lookup-based convolutional network that takes 10~50ms in the same environment.
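To make the speedup concrete, here is a minimal NumPy sketch of the lookup-based idea from the LCNN paper, applied to a single 1x1 convolution position. All variable names and sizes here are illustrative assumptions, not taken from the tf-lcnn code: each filter's weight vector is a sparse combination of a small shared dictionary, so one shared projection replaces many per-filter multiplications.

```python
import numpy as np

# Hedged sketch of the LCNN trick (arXiv:1611.06473), not the tf-lcnn code:
# filters are sparse combinations of a shared dictionary, so a dense
# matrix-vector product is replaced by one small projection plus lookups.

rng = np.random.default_rng(0)

C, K, M = 64, 16, 128   # input channels, dictionary atoms, output filters (illustrative)
S = 3                   # sparsity: nonzero dictionary atoms per filter

D = rng.standard_normal((K, C))                                       # shared dictionary
idx = np.array([rng.choice(K, S, replace=False) for _ in range(M)])   # lookup table
coef = rng.standard_normal((M, S))                                    # sparse coefficients

x = rng.standard_normal(C)  # input vector at one spatial position

# Dense path: reconstruct the full weight matrix W (M x C) and multiply.
W = np.zeros((M, C))
for m in range(M):
    W[m] = coef[m] @ D[idx[m]]
dense_out = W @ x           # M*C multiplies

# Lookup path: project the input onto the dictionary once (K*C multiplies),
# then each output channel is a sparse combination of K precomputed values.
proj = D @ x                                      # shared work
lookup_out = np.einsum('ms,ms->m', coef, proj[idx])  # M*S multiplies

assert np.allclose(dense_out, lookup_out)
```

The dense path costs M*C multiplies per position, while the lookup path costs K*C + M*S; with K much smaller than M and S small, most of the per-filter work disappears, which is where the reported 150ms-to-10~50ms reduction comes from.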
https://github.com/ildoonet/tf-lcnn/raw/master/images/timeline_alexnet.png
So… based on these new techniques, I am looking for ideas to try.
For example, OpenPose on a robot: Human Pose Estimation Deep Learning Model (OpenPose) ROS Package
Any thoughts?
ildoonet
September 14
It's intended for Snapdragon CPUs such as those in smartphones.
I'm also working on ARM CPUs.
Also, this code is not optimized for any specific CPU architecture, so it can be improved further.
davecrawley
September 19
That’s really cool!
Do you have trained neural nets to go along with it?
ildoonet
September 19
I have an AlexNet that is 20 times faster than the original. I will add more models. Is there a particular model you want?