
--
You received this message because you are subscribed to the Google Groups "ROS SIG NG ROS" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ros-sig-ng-ro...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Hi Adolfo,

we don't have any news on a life cycle implementation in the ROS 2 prototype yet. Hopefully there will be news in a month or two on this topic.
A component-based interface for nodes in general (with or without a life cycle) will allow us to either run a component in a separate process or load multiple of them dynamically into one process and run them together. In ROS 1 this is a programming-time decision, since nodes and nodelets use different APIs. In ROS 2 it will be a deploy-time decision - likely even with a way to change the configuration at run time. The additional life cycle will allow tools like roslaunch to switch from the initialization state to running once all components have finished their init phase. It also makes it easy to suspend all components in a process, unload one of them, load a different one, initialize it, and then resume operation.
These kinds of operations will be available as a service interface on the process, to enable orchestration from the outside.
But these are currently only ideas / concepts, not yet implemented in the ROS 2 prototype. As far as I understand, your example about node composition is what nodelets already provide in ROS 1 (minus a life cycle).
That will definitely be possible in ROS 2 too. In ROS 1 it is also possible to add new components to a process and remove existing ones. ROS 2 will likely improve flexibility, allowing more ways to remap topics within the process, and will also keep the ROS graph within the process introspectable.
Regarding the transport choices:

1. DDS uses sockets to communicate, independent of the process layout.
2. Some DDS vendors provide the option to use shared memory to communicate between endpoints on the same system, but that still requires serialization / deserialization.
3. ROS will provide optimized intra-process communication, so if the endpoints are in the same process they will only exchange references (like nodelets), which does not require any serialization / deserialization.

Choosing between (1) and (2) is currently only configurable through the vendor-specific configuration. The default will likely be to choose (3) whenever possible, since it will be the fastest approach.
I would assume that it will be possible to use the service API of components to orchestrate them from the outside "at will". So implementing a computational graph for control should be possible. We also think that it can, e.g., be used to implement "synchronous" pipelines like with ecto in ROS 1.
Hi,
Regarding the life-cycle: The OROCOS component life-cycle might be worth a look, it’s pretty simple but covers all the bases in one, common life-cycle.
There are also component models with hierarchical life-cycles (e.g., in robotics, SmartSoft comes to mind), but I believe that’s not necessary. If anybody wants hierarchical life-cycles, I’d be happy to provide more rationale against ;-)
Regarding intra-process data passing, I would only like to add that lock-free data passing is not primarily for improving performance. In fact, it can make performance worse in some cases. The real advantage is that they avoid a scheduling point in the kernel that could lead to context switches or (worse) priority inversion. So, this is primarily important when we’re talking real-time guarantees. Btw, OROCOS has implementations of such data structures that we might be able to re-use (not sure about licensing, but otherwise I see no issues).
Mit freundlichen Grüßen / Best regards
Ingo Luetkebohle
Software Design and Analysis (CR/AEA2)
Tel. +49(711)811-12248
Fax +49(711)811-0
Ingo.Lue...@de.bosch.com
Currently we are not doing any of the locking, as intra-process comms between nodes will use the middleware to asynchronously communicate the addresses of shared pointers. This allows us to more easily mimic the QoS settings of the inter-process topic. At least that's the leading idea for how to implement it. So in that case locking and thread communication will be done by the middleware. Both OpenSplice and RTI have documents which detail their threading model and how to configure it in different ways. I haven't looked to see if they let you implement your own locking strategies. In OpenSplice's "deployment" manual they talk about deployment configurations, including shared memory vs. socket comms and internal threading models.
Another strategy would be to do the intra-process comms through our own custom queueing and synchronization, in which case the locking would be configurable by you implementing your own Executor class. There's no guarantee, after all, that you're even using an executor with more than one thread, in which case no locking would be needed. Our goal is that all work done by our code is done in threads created by the Executor, which you can override and control if you like. The middleware will have its own threads, for reading sockets and the like, but the vendors are pretty careful to describe those threads and how they work. As you can imagine, their customers care about that stuff, so their documentation on the subject is decent.
So far for intra-process comms we've only done prototypes and proofs of concepts. However, I'm starting work on this issue just now, so there will be more concrete details in the next few months.
Hi Adolfo,

What is (are) the main driver(s) behind rolling your own intra-process comms?
- Not all DDS vendors support shared memory, and you want to remain vendor-agnostic.
- Not having to pull in DDS dependencies in minimal-footprint deployments not requiring it.

Shared memory is not the same as the intra-process communication we are developing (which will pass references, like nodelets do). In the case of shared memory you still need to serialize and deserialize the messages. When passing references, that is not necessary at all.

- Dirk
From a read of the OpenSplice deployment doc, they discuss their threads. I do not really see any mention of the application itself and how it should run - of course, that could be due to bedtime being an hour ago. Long story short, I do not see any restrictions which OpenSplice would place on an intra-process message-passing layer. I agree that nodelets + transparent intra-process messaging should allow composition as described in the OP.
I do not understand this statement from Adolfo: "Non-synchronous control-oriented applications might be interested in implementing (as an extension) lock-free synchronization primitives to ensure forward progress of the control threads." Lock-free does not guarantee forward progress of control threads vs. message queueing/passing; the design of the control logic within those threads does, combined with proper runtime analysis/profiling and testing.
Implementing lock-free data structures is non-trivial,
and cases where lock-free provides an actual runtime performance improvement over std::mutex are extremely rare.
Adolfo, maybe an example use case is needed? Maybe you have encountered one of these rare cases.
Regarding the life-cycle: The OROCOS component life-cycle might be worth a look, it’s pretty simple but covers all the bases in one, common life-cycle.
We've been looking at OROCOS and OpenRTC (which is an implementation of the RTC standard, also done by the OMG: http://www.omg.org/spec/RTC/) for inspiration on what our life cycle should look like. We haven't committed to using one of the models as a standard just yet, but I wouldn't be surprised if we ended up adopting one of them. We've been in contact with the guys behind OpenRTC at AIST and discussed this topic with them at length.
As far as I can tell, these are virtually indistinguishable, as both are of the basic “initialized, active, inactive, error” variety.
One aspect that I couldn’t discern from the RTC spec, but which the OpenRTC guys surely must have handled, is what to do with ports during the “inactive” state. In Orocos RTT, “operations” (essentially, service ports) can be invoked in all states, but the component’s update thread is only invoked during the “started” state (what RTC would call active). This also means that data coming in on data flow ports is only handled during the active state.
In contrast, the RTC spec is a bit more ambiguous and only says “However, the behavioral contracts of such connections are dependent on the interfaces exposed by the ports and are not described normatively by this specification.” (section 5.4.2.3).
I think this aspect is fairly important and should be clarified. Distinguishing two kinds of ports, and putting that into the spec, like Orocos RTT does it, would be a good approach in my opinion.
There are also component models with hierarchical life-cycles (e.g., in robotics, SmartSoft comes to mind), but I believe that’s not necessary. If anybody wants hierarchical life-cycles, I’d be happy to provide more rationale against ;-)
I'm not familiar with SmartSoft, nor have I used hierarchical life cycles before. I for one would be interested in your opinion on that pattern vs. a flat hierarchy like what I assume is in OROCOS and OpenRTC.
SmartSoft currently considers the mode of operation (in RTC terminology) to be part of the state, and puts a hierarchy below the "active" state to realize it. We've discussed this with them at length and I think we could convince them to treat this separately, much like RTC does it. This is most likely not reflected in their code, though.
Regarding intra-process data passing, I would only like to add that lock-free data passing is not primarily for improving performance. In fact, it can make performance worse in some cases. The real advantage is that they avoid a scheduling point in the kernel that could lead to context switches or (worse) priority inversion. So, this is primarily important when we’re talking real-time guarantees. Btw, OROCOS has implementations of such data structures that we might be able to re-use (not sure about licensing, but otherwise I see no issues).
This is the pivotal issue for our intra-process comms at the moment. If we can demonstrate that the message passing of shared pointer addresses meets the needs of hard or soft real-time situations, then the locking will be a matter of configuring or customizing the middleware.
I think this could be a problem, depending on exactly how it’s done. Does the C++ standard say anything about whether implementations *have* to use atomic compare and swap, or could an implementation also fall back to a mutex? The latter would be an issue because of the blocking semantics.
If you use a single shared_ptr object, the atomic_store and atomic_load functions could probably meet real-time requirements. However, I would think this is error-prone. If someone modifies the pointer the "regular" way, it would be unsafe.
However, based on our discussions I'm beginning to think that in order to meet all the varied needs, we'll probably have to consider doing custom intra-process comms and expose the threading and locking primitives as overridable parts of the Executor class. If we go this route, then reusing components from OROCOS or liblfds might be something we investigate. It should even be possible to avoid depending on them directly, and instead provide a package which contains real-time and/or lock-free versions of the Executor, to be used in specific situations.
Orocos RTT makes this configurable on a per-port basis.
Personally, I think your idea of considering locking in conjunction with the threading is good. Something like RTC's ExecutionContext (which is internally sequential) could be a reasonable approach to simplify this for the user. E.g., if components are only in one context, you don't need locking. Otherwise you do, for those which are in multiple ones.
Cheers,
Ingo
Hi Bob,
I concur with most of your points; however, the difference between lock-free and std::mutex has nothing to do with CAS or not. It has to do with std::mutex having blocking semantics. This is always true, irrespective of whether the mutex is implemented using a system call or using CAS.
cheers,
I agree on the need for some well-defined use cases. Otherwise, we are just
playing around with "wouldn't it be neat if"s. As fun as that is, it may not
necessarily lead to the best design.
For what it's worth, I've drafted a node life cycle document for
design.ros2.org. It's based on our experience and on recent discussions amongst
the people who handle this stuff at the OMG.
https://github.com/ros2/design/pull/34
The draft was written while I was lacking sleep somewhere over far-northern
Russia, so treat it like a 3AM piece of code. It's rambling, incomplete
(especially the composite node stuff), and possibly incoherent. Still, I hope
it can be a basis for moving this discussion forward to producing a design.
Please edit the article (add use cases!) as you see fit.
Geoff