DATABASE STREAMING to jAER using UDP


mephisto

Apr 3, 2019, 12:53:19 PM
to jaer-users
Hello, I am quite new to this field.
I am trying to stream one of the .rosbag dataset files that can be found here http://rpg.ifi.uzh.ch/davis_data.html (containing events) from a server to a client over a UDP socket within the same LAN (two Ethernet cables connected to a switch), and to visualize the events on the client in real time with jAER (on the client: File -> Remote -> Enable unicast UDP input). I get a lot of packet loss, so in the viewer I cannot distinguish the shape of the objects; I actually see very few events. I would like to measure latency and other metrics and adjust parameters to get a better result. Do you have any hints on how to do that? I have tried changing the buffer size on both the client and the transmitter side, but the situation does not seem to improve noticeably.
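To quantify the loss I was thinking of instrumenting the link with a generic Python sender/receiver, roughly like the sketch below. The port, payload size, and the 4-byte sequence-number framing are placeholders I made up for the test; this is not jAER's AE packet format.

# Sketch: generic UDP loss counter (placeholder framing, not jAER's format).
import socket
import struct

PORT = 8991          # arbitrary test port
PAYLOAD = 1024       # arbitrary payload size in bytes

def run_receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Ask the OS for a bigger receive buffer; the kernel may cap the value,
    # so read back what was actually granted.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
    print("SO_RCVBUF granted:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    sock.bind(("", PORT))
    expected, received, lost = None, 0, 0
    while True:
        data, _ = sock.recvfrom(65535)
        (seq,) = struct.unpack(">I", data[:4])   # big-endian sequence number
        if expected is not None and seq > expected:
            lost += seq - expected               # datagrams that never arrived
        expected = seq + 1
        received += 1
        if received % 1000 == 0:
            print(f"received={received} lost={lost}")

def run_sender(dest_ip, n_packets=100000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(n_packets):
        sock.sendto(struct.pack(">I", seq) + bytes(PAYLOAD), (dest_ip, PORT))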
Thanks a lot

Tobi Delbruck (INI)

Apr 3, 2019, 2:05:35 PM
to jaer-...@googlegroups.com
Interesting. Did you try streaming on the same computer? When I do this
I can achieve nearly zero packet loss using jaer to jaer transfer.

Do you see many dropped packets from the packet counter debug output?

BTW, why are you trying to visualize in jaer the rosbag output? You can
play (albeit not so well) the rosbag directly in jaer now.


mephisto

Apr 4, 2019, 1:43:12 PM
to jaer-users
Hi Mr. Tobi,
thanks a lot for your answer and sorry for the delay; I took some time to run the experiments.
Actually, I was making a mistake in my previous experiment: I was just opening a UDP socket through Python on machine A to stream the dataset file to machine B, which was running jAER. The file is sent in datagrams over UDP, and there is a lot of packet loss in this case because the streaming on the server side is not paced according to the events' generation times. So I was not emulating the real-time generation of events at all; I was effectively just sending a large file over UDP.
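For example, a paced sender would look roughly like the sketch below, assuming the events have already been extracted from the rosbag into (timestamp, x, y, polarity) tuples; the destination address, packet grouping, and struct layout are placeholders, not jAER's AEUnicast format.

# Sketch: pace the UDP stream by the events' own timestamps instead of
# dumping the whole file at once. Placeholder packet format, not jAER's.
import socket
import struct
import time

def stream_events(events, dest=("192.168.1.2", 8991), events_per_packet=256):
    # events: list of (timestamp_us, x, y, polarity) tuples
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    t0_event = events[0][0]           # first event timestamp, in microseconds
    t0_wall = time.monotonic()
    for i in range(0, len(events), events_per_packet):
        chunk = events[i:i + events_per_packet]
        # Wait until the wall clock catches up with the event time.
        delay = t0_wall + (chunk[0][0] - t0_event) * 1e-6 - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        payload = b"".join(struct.pack(">IHHB", ts, x, y, pol)
                           for ts, x, y, pol in chunk)
        sock.sendto(payload, dest)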
As suggested, I have installed jAER on the server machine as well and performed the streaming jAER to jAER.

Yes, if I run the server/client jAER locally I do not get packet losses, but the scenario I am considering involves Mobile Edge Computing. The idea is to keep the database on machine A and to stream the events in real time to machine B, which performs some task (e.g. image recognition, tracking, augmented reality) that triggers a feedback, and this seems possible with a jAER-to-jAER streaming approach. So I will run some tests to evaluate packet loss and latency and understand the network requirements.
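For the latency part I plan to start with a simple round-trip probe between the two machines, roughly like the sketch below. One-way latency would need synchronized clocks (e.g. NTP/PTP), so I will first settle for RTT/2 as an estimate; the port is an arbitrary placeholder.

# Sketch: UDP round-trip latency probe between machine A and machine B.
import socket
import struct
import time

PORT = 8992  # arbitrary test port

def echo_server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        sock.sendto(data, addr)          # reflect the probe unchanged

def measure_rtt(server_ip, n=100):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(n):
        t_send = time.monotonic()
        sock.sendto(struct.pack(">d", t_send), (server_ip, PORT))
        try:
            sock.recvfrom(1024)
            rtts.append(time.monotonic() - t_send)
        except socket.timeout:
            pass                          # lost probe
    if rtts:
        rtts.sort()
        print(f"median RTT ~ {rtts[len(rtts)//2]*1e3:.2f} ms "
              f"({len(rtts)}/{n} probes answered)")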

I would also like to ask whether there is a minimum amount of events (order of magnitude) required to perform simple object recognition/detection. As I said, I am completely new to this field; this would let me understand, for example, the minimum number of events that I have to deliver correctly, or how much redundancy I should add to my network to guarantee proper functioning.
Thanks a lot

Tobi Delbruck (INI)

Apr 5, 2019, 8:54:58 AM
to jaer-...@googlegroups.com

I don't really understand why you want to stream a dataset from one jaer to another. Jaer is aimed at basic algorithm development and for testing the event cameras. It is not aimed at embedded IoT applications.

Regarding the number of events needed to recognize things, it depends on the application. For RoShamBo to recognize the hand symbols, DVS constant-count frames of as few as 500 or 1000 events work quite reliably.

For the predator-prey robot application, we used DVS constant-count frames of 5000 events. But we could go a bit lower and still make reliable steering classifications.

For the DDD17 driving experiments, we used DVS constant-time frames that had a wide range of event counts, typically 10k or 50k events (I don't remember the details).
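If it helps, the constant-count idea is just to histogram a fixed number of events per frame, so each frame carries roughly the same amount of information regardless of how fast the scene moves. A toy numpy sketch of that accumulation (not jAER's implementation; the 240x180 DAVIS resolution and the signed-polarity accumulation are assumptions for illustration):

# Sketch: build constant-count DVS frames of N events each.
import numpy as np

def constant_count_frames(x, y, pol, events_per_frame=5000, width=240, height=180):
    # x, y: pixel coordinates; pol: +1/-1 polarity; equal-length 1-D arrays.
    frames = []
    for start in range(0, len(x) - events_per_frame + 1, events_per_frame):
        sl = slice(start, start + events_per_frame)
        frame = np.zeros((height, width), dtype=np.float32)
        # Signed accumulation: ON events add, OFF events subtract.
        np.add.at(frame, (y[sl], x[sl]), pol[sl])
        frames.append(frame)
    return frames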


Lungu, I.-A., Corradi, F., and Delbruck, T. (2017). Live Demonstration: Convolutional Neural Network Driven by Dynamic Vision Sensor Playing RoShamBo. in 2017 IEEE Symposium on Circuits and Systems (ISCAS 2017) (Baltimore, MD, USA). Available at: https://drive.google.com/file/d/0BzvXOhBHjRheYjNWZGYtNFpVRkU/view?usp=sharing.
Moeys, D. P., Corradi, F., Kerr, E., Vance, P., Das, G., Neil, D., et al. (2016). Steering a predator robot using a mixed frame/event-driven convolutional neural network. in 2016 Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP), 1–8. doi:10.1109/EBCCSP.2016.7605233.
Moeys, D. P., Neil, D., Corradi, F., Kerr, E., Vance, P., Das, G., et al. (2018). PRED18: Dataset and further experiments with DAVIS event camera in predator-prey robot chasing. in EBCCSP 2018, (submitted). Available at: https://www.dropbox.com/s/yn7sbqe8my9mqse/MoeysEBCCSP2018.pdf?dl=0.

Binas, J., Neil, D., Liu, S.-C., and Delbruck, T. (2017). DDD17: End-To-End DAVIS Driving Dataset. in ICML’17 Workshop on Machine Learning for Autonomous Vehicles (MLAV 2017) (Sydney, Australia). Available at: https://openreview.net/forum?id=HkehpKVG-&noteId=HkehpKVG-.


mephisto

Apr 8, 2019, 5:15:06 AM
to jaer-users
Hi Tobi,
thanks a lot for your answer and for the references.

Yes, I understand the aim of jAER and I know that it may seem strange. I am trying to investigate scenarios in which the machine that hosts the database does not have enough computational resources to perform image recognition/tracking, etc., or, for example, a situation in which I have both a standard camera and an event camera and I can correlate frames and events on a remote machine with high computational capacity.

I am following these works, which may better clarify my intentions:

Gehrig, D., Rebecq, H., Gallego, G., and Scaramuzza, D. (2018). Asynchronous, Photometric Feature Tracking using Events and Frames. Available at: https://arxiv.org/abs/1807.09713.

Liu, L., Li, H., and Gruteser, M. (2019). Edge Assisted Real-time Object Detection for Mobile Augmented Reality.

Zhang, W., Li, S., Liu, L., Jia, Z., Zhang, Y., and Raychaudhuri, D. (2019). Hetero-Edge: Orchestration of Real-time Vision Applications on Heterogeneous Edge Clouds.

Thanks a lot for your help and advice.