I am working on a new project (https://github.com/RobotWebTools/webrtc_ros) that streams video and audio to a web browser (or other application) using WebRTC (http://www.webrtc.org/). I have used image_transport extensively and have successfully integrated streaming of video topics, but I have never used any of the audio-related ROS packages. While I could communicate directly with the audio hardware on the platform where a node is running, I would prefer to publish over a ROS topic instead.

Specifically, I am looking for more information on how audio is sent using audio_common. I know that it uses audio_common_msgs, but there is not much documentation on how the audio is encoded; aside from the source code, there is little information on using the audio messages. Does anyone have experience with transporting audio using these messages?

I'm also curious whether anyone has looked into synchronizing audio with video over ROS topics. If anyone has used audio with WebRTC before, please let me know, as implementing a custom audio source is proving to be more complicated than a video source.

--Mitchell
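For context, my current understanding is that audio_common_msgs/AudioData carries only an opaque `uint8[] data` field, with the actual encoding determined by however the capture node's GStreamer pipeline was configured (mp3 by default, raw PCM if the capture side is set up for wave output). A minimal sketch of what I'd expect to do with a raw chunk, assuming S16LE PCM and a sample rate/channel count known out-of-band (the message itself carries no metadata, so these are assumptions about the publisher's configuration):

```python
# Sketch: interpreting an audio_common AudioData payload as raw PCM.
# Assumes the capture side publishes raw little-endian 16-bit samples;
# sample rate and channel count must be known out-of-band, since the
# AudioData message contains only a bare byte array.
import array

def pcm_chunk_to_samples(data: bytes, channels: int = 1):
    """Convert a little-endian 16-bit PCM byte chunk into per-channel
    lists of integer samples."""
    samples = array.array('h')          # signed 16-bit integers
    samples.frombytes(data)
    if array.array('h', [1]).tobytes() != b'\x01\x00':
        samples.byteswap()              # correct for big-endian hosts
    # De-interleave: sample i of channel c sits at index i*channels + c.
    return [list(samples[c::channels]) for c in range(channels)]

# Two stereo frames: (L=1, R=-1) then (L=2, R=-2), interleaved S16LE.
chunk = b'\x01\x00\xff\xff\x02\x00\xfe\xff'
left, right = pcm_chunk_to_samples(chunk, channels=2)
# left == [1, 2], right == [-1, -2]
```

In a ROS node this would run inside a subscriber callback on the audio topic, feeding the de-interleaved samples into the WebRTC audio source; the mp3 case would additionally need a decode step before anything like this applies.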
--
You received this message because you are subscribed to the Google Groups "ROS Perception Special Interest Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ros-sig-percept...@googlegroups.com.
To post to this group, send email to ros-sig-p...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ros-sig-perception/ec72ec5f-9dd8-4a95-bf69-fad39204c655%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.