Re: [Ros-sig-mm] mm revival, next steps


Daniel Stonier

Apr 24, 2012, 3:39:29 AM
to ros-s...@googlegroups.com

On 19 April 2012 22:45, Jeff Rousseau <jrou...@aptima.com> wrote:
Hey Folks,

I'd like to revive this discussion, starting with what I believe are some next steps.  Unfortunately, due to forces beyond my control, I haven't had enough dev time to devote to this SIG for the Fuerte development cycle.  I'm taking aim at Groovy now.  I believe there is clear interest in this topic and that the ROS community would benefit from a common multi-robot comms solution (please chime in if you disagree).


Chiming in, but only to agree.
 
I'm currently working on a PGM transport for ROS (C++) based on OpenPGM, which may help with some of the network issues outlined in the prior transport discussion thread--however, I'm currently leaving QoS and message priority off the table for the sake of keeping it simple (at least on a first pass).  I'm also experimenting with the existing ROSUDP transport in a single-message-per-datagram mode as an alternative.  Assuming a "good enough" transport solution is reached, the next step is finding a way to share topic information across machines.

 
I do not have the time to delve this deeply into the transport protocols, but I'm still interested in following the progress of such efforts.
 
A quick survey of currently-available multi-master topic-sharing solutions runs the gamut from custom ad-hoc & mesh network solutions (using OLSR, BATMAN or RT-WMP, for example) to standard infrastructure-mode wifi with mDNS discovery.  Here's a shortlist of some solutions:

ros-rt-wmp
wifi_comm
foreign_relay
multimaster_experimental/rosproxy
rosmaster_sd (not touched since 2010?)
batman_mesh_info (deprecated)

I believe our solution should support commodity 802.11 hardware with stock NIC drivers running an IP network (at least as a starting point).  

I strongly concur here too. Ad-hoc and mesh networking have some interesting use cases, especially in research, but starting with what is available is the fastest way to get to some practical solutions.
 
Any one of the above solutions can be used as a starting point to get something going--I particularly like the idea of having a "public" master (I think the idea came from ROS Building manager) that only exposes a particular set of topics from the internal master.

As for master discovery, multicast DNS seems like the general solution that folks are adopting.  Anyone think adopting something like Bonjour for discovery is a bad idea? Are there alternatives? I've been using a custom mDNS/Bonjour solution successfully in my lab for several years now.
 
We've been using zeroconf mdns/dns-sd with avahi (linux) and jmdns (android) for a while now also. Having reasonably consistent 'ros' interfaces for configuring and setting these alongside a Bonjour implementation would be great. 

What kind of custom mDNS solution did you create?

Daniel.

Jeff Rousseau <jrousseau@aptima.com>

Apr 24, 2012, 7:55:24 AM
to ros-s...@googlegroups.com

I currently have a simple node that uses Avahi to broadcast a named master with a specified port (so you can run many discoverable masters on one machine).  However, I may soon be forced to move to Bonjour due to a customer requirement.  As long as our service/proto strings match, it doesn’t really matter what mDNS solution we use as long as it’s largely Zeroconf compatible.

 

For our service/protocol string we just use “_ros._tcp” (obviously for TCP transport)

 

What kind of “interface” did you have in mind? A ROS service & latched topic for master availability events perhaps?

 


Ken Conley

Apr 26, 2012, 4:49:39 PM
to ros-s...@googlegroups.com
It would be good to get Dirk in on this discussion, but he's out of town right now. A lot of the PGM/etc... stuff is relevant to the efforts he's leading around a next-generation "ROS 2.0" middleware.

- Ken

Daniel Stonier

Apr 29, 2012, 1:44:33 AM
to ros-s...@googlegroups.com
On 24 April 2012 20:55, Jeff Rousseau <jrou...@aptima.com> wrote:

I currently have a simple node that uses Avahi to broadcast a named master with a specified port (so you can run many discoverable masters on one machine).  However, I may soon be forced to move to Bonjour due to a customer requirement.  As long as our service/proto strings match, it doesn’t really matter what mDNS solution we use as long as it’s largely Zeroconf compatible.


I agree. I've been using _ros-master._tcp and _app-manager._tcp (the latter advertising a variant of willow's multimaster style app manager instead of the ros master). I do think zeroconf implementations would benefit from a consistent ros api (pub/subs) though. That way one could move a suite of programs watching/reacting to a zeroconf node's list of services from linux (avahi) to apple (bonjour) to java-based (jmdns) without a code change.
 

 

For our service/protocol string we just use “_ros._tcp” (obviously for TCP transport)

 

What kind of “interface” did you have in mind? A ROS service & latched topic for master availability events perhaps?


Yes, very similar. I wanted to be notified when services appeared and disappeared rather than just resolving on the expectation that they would always be there. Both avahi and jmdns provide hooks for that (though that part of jmdns is still rather experimental).

End result: I have both a c++ library and a node interface for doing that kind of thing (zeroconf_implementations), and the jmdns interface just has a jar file which wraps the same functionality rather more simply (zeroconf_android). These are still very early implementations and I'm happy to iterate on what's there.
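
To give a rough idea of the kind of api I mean, a watcher of that node interface could look something like the sketch below; the topic names and the DiscoveredService message are placeholders for illustration, not necessarily what is actually in zeroconf_implementations:

#!/usr/bin/env python
# Sketch only: the topic and message names below are assumptions, not the real API.
import rospy
from zeroconf_msgs.msg import DiscoveredService  # hypothetical message type


def on_new(msg):
    # Called whenever the zeroconf node reports a newly discovered service.
    rospy.loginfo("zeroconf service appeared: %s on port %d", msg.name, msg.port)


def on_lost(msg):
    # Called when a previously discovered service disappears (or drops out of range).
    rospy.loginfo("zeroconf service disappeared: %s", msg.name)


if __name__ == '__main__':
    rospy.init_node('zeroconf_watcher')
    rospy.Subscriber('zeroconf/new_connections', DiscoveredService, on_new)
    rospy.Subscriber('zeroconf/lost_connections', DiscoveredService, on_lost)
    rospy.spin()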

Do you have some code up and about?

Daniel.

Jeff Rousseau <jrousseau@aptima.com>

May 1, 2012, 10:38:14 AM
to ros-s...@googlegroups.com

The avahi/dbus node I have is rather primitive; the source is currently unpublished (I’d have to go through contracts/legal to publish it as BSD).  All it does is read two ros params, “name” and “port”, and add a service, roughly like this (python):

 

import avahi
import dbus

# 'bus' (the D-Bus system bus) and 'server' (the Avahi server proxy) are set up
# elsewhere in the node; 'in' is a reserved word in Python, so use 'group'.
group = dbus.Interface(bus.get_object(avahi.DBUS_NAME,
                                      server.EntryGroupNew()),
                       avahi.DBUS_INTERFACE_ENTRY_GROUP)

# Advertise the named master as a "_ros._tcp" service on the given port.
group.AddService(avahi.IF_UNSPEC, avahi.PROTO_UNSPEC, dbus.UInt32(0),
                 self.robot_name, "_ros._tcp", "", "",
                 dbus.UInt16(self.port), "")
group.Commit()  # the service is only announced once the group is committed
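
Once the entry group is committed, the master shows up as an ordinary DNS-SD service of type "_ros._tcp", so any Avahi/Bonjour browser on the LAN (avahi-browse, for example) can resolve its host and port; that's why agreeing on the service/proto string matters more than which mDNS stack does the advertising.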

 

My aim was for “public” masters to discover each other and then use the existing XMLRPC API to query each other's lists of topics/services.

 

So it looks like our approaches differ in that mine aims for discovery at the master level, while the zeroconf_implementation targets the node level (judging by a quick look in ZeroconfNode.cpp).  I’d like to hear arguments for putting the node developer in charge of how their node should be exposed on the network—it feels like a possible maintenance nightmare if I had to (re)configure every node and what topics/services it exposed on my network.  It seems like a simple master-level white list would suffice. 
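
For illustration, a rough sketch of that kind of master-level query plus whitelist, using only the master XMLRPC API that already exists (the remote URI and whitelist values here are made up):

import xmlrpclib  # Python 2 standard library

# URI of a remote "public" master found via zeroconf (placeholder value).
remote_master_uri = 'http://robot1.local:11311'
whitelist = ['/map', '/tf', '/robot_pose']  # topics allowed across masters

master = xmlrpclib.ServerProxy(remote_master_uri)
# getPublishedTopics(caller_id, subgraph) -> (code, statusMessage, [[name, type], ...])
code, msg, topics = master.getPublishedTopics('/multimaster_sync', '')
if code == 1:
    shared = [(name, topic_type) for name, topic_type in topics if name in whitelist]
    print "topics to expose across masters:", shared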

Daniel Stonier

May 2, 2012, 3:55:15 AM
to ros-s...@googlegroups.com
On 1 May 2012 23:38, Jeff Rousseau <jrou...@aptima.com> wrote:

So it looks like our approaches differ in that mine aims for discovery at the master level, while the zeroconf_implementation targets the node level (judging by a quick look in ZeroconfNode.cpp).  I’d like to hear arguments for putting the node developer in charge of how their node should be exposed on the network—it feels like a possible maintenance nightmare if I had to (re)configure every node and what topics/services it exposed on my network.  It seems like a simple master-level white list would suffice.  


Jeff, are you on the same page as I am? There is no ZeroconfNode.cpp, and there is nothing in the zeroconf_avahi package targeting the node level or even related to exposing a node's pubsubs and services. As you say, that would be a configuration nightmare unless it's something built into the system (something I think DARC is aiming at).

There is a ZeroconfNode class, but the only reason it has 'Node' in its name is that it runs as a standalone node. It is only responsible for publishing and discovering zeroconf services and doesn't know anything else about the rest of the system it is running with. I thought the api should make that fairly clear.

We use it to publish a ros master, exactly as you do. We also use it to discover a 'building manager' master that can be used with the multimaster app manager. And lastly, we use it to advertise the multimaster app manager so that the building master can invite it into a multi-robot system. The app manager is responsible for what is publicly exposed by a robot, not the zeroconf node. In addition, the zeroconf node sets up callbacks/ros pubsubs to keep track of zeroconf services as they come online or go offline (or out of wireless range).

We're talking about the same thing; zeroconf_avahi just provides a few extra zeroconf-related bells and whistles.
 
The tutorials should provide you with a clear idea of what it is doing.

Daniel.

 


Jeff Rousseau

May 2, 2012, 7:01:40 AM
to ros-s...@googlegroups.com
Ah, simple misunderstanding. I was browsing the source in the repo instead of reading the tutorials. Sorry for the confusion. I had found the ZeroConfNode source and thought it was an example of how to write a zeroconf-enabled node. I'll run through the tutorials.

Daniel Stonier

May 2, 2012, 9:43:03 AM
to ros-s...@googlegroups.com
On 2 May 2012 20:01, Jeff Rousseau <jrou...@cs.uml.edu> wrote:
Ah, simple misunderstanding. I was browsing the source in the repo instead of reading the tutorials. Sorry for the confusion. I had found the ZeroConfNode source and thought it was an example of how to write a zeroconf-enabled node. I'll run through the tutorials.


 
No worries.

Daniel.