I am currently studying OpenSplice DDS release 4.1 "090617" running on Ubuntu 9.04.
My question is about the multicast usage.
As far as I have understood what happens on the wire, all the data
published by the system's publishers is sent using either
broadcast packets (the default case) or a
class D multicast address, according to the config file. On every host where
subscribers are running, the network service (ddsi or networking) has to
listen on this multicast address.
This means that these processes (ddsi or networking) will receive ALL
the published data, even if there are no subscribers interested in
that specific data on the host.
I guess filtering is done by these network service processes, and
only the relevant data is forwarded to the user subscriber processes.
My guess is that this holds even if the data transferred
within the topic is rather large (several MBytes): all the hosts with
subscribers will receive these MBytes
even if they don't need them.
Is this correct?
Thanks for your answers.
Emmanuel Taurel
_______________________________________________
OpenSplice DDS Developer Mailing List
Deve...@opensplice.org
Subscribe / Unsubscribe http://www.opensplice.org/mailman/listinfo/developer
Hi Emmanuel,
Thanks for your inquiry. Let me explain how you can exploit both a standard DDS feature (partitions) and an OpenSplice DDS feature (transparent and conditional/dynamic mapping of DDS partitions onto pre-configured OpenSplice 'Network Partitions') to balance networking efficiency against processing efficiency.
Just as a reminder: in DDS, communication takes place between typed DataWriters and DataReaders (which 'share' the same Topic name and type and have compatible QoS policies) belonging to Publishers and Subscribers respectively, both of which are 'connected' to a matching set of DDS Partitions (see also section 7.1.3.13 of OMG DDS Rev. 1.2).
So a DDS Partition is an elementary concept in DDS: it defines a logical 'dataspace' in which data (topic samples) lives. Even if you're not explicitly utilizing this in your application, you'll actually be using the 'default partition' (the zero-length default sequence of partition names).
Whereas DDS ‘Domains’ provide a physical separation between DDS runtimes (e.g. related to physical resources used for communication such as addresses, ports, etc), the DDS ‘Partition’ concept allows for a fine-grained and dynamic logical grouping (or separation as its inverse) of published/subscribed information.
With that in mind, let's return to your question about multicasting and 'communication granularity'. Let's start with the purely physical view: assuming the system is a set of computing nodes interconnected by some sort of network, a publish/subscribe middleware such as DDS should benefit from features of the physical network, e.g. by exploiting its multicast capabilities to efficiently distribute data from a publishing node to a (dynamic, emerging set of) subscribing nodes.
Now there are different ways, with different granularities, to actually do this. In OpenSplice DDS we've implemented a networking architecture where we schedule information transfer between nodes based upon Importance (the TRANSPORT_PRIORITY QoS policy, which drives the 'determinism' of communication) and Urgency (the LATENCY_BUDGET QoS policy, which drives the 'efficiency' of communication), utilizing a pluggable networking service that exploits multiple priority lanes ('network channels' with matching Tx/Rx processing and traffic shaping) to dynamically balance, based upon actual QoS policy values, between 'low latency' and 'high volume'.
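As a rough, hypothetical model of such priority-lane selection (this sketch is an illustration only, not the actual OpenSplice scheduling code; the `Channel` struct and `select_channel` helper are invented for the example), a sample could be steered to the network channel whose priority band contains its TRANSPORT_PRIORITY value:

```cpp
#include <cassert>
#include <climits>
#include <string>
#include <vector>

// Hypothetical network channel: a name and the (inclusive) range of
// TRANSPORT_PRIORITY values it serves.
struct Channel {
    std::string name;
    int min_priority;
    int max_priority;
};

// Pick the channel whose priority band contains the sample's
// TRANSPORT_PRIORITY value; fall back to the first (default) channel
// when no band matches. Assumes 'channels' is non-empty.
const Channel& select_channel(const std::vector<Channel>& channels,
                              int transport_priority) {
    for (const auto& c : channels) {
        if (transport_priority >= c.min_priority &&
            transport_priority <= c.max_priority)
            return c;
    }
    return channels.front();  // default channel
}
```

With two channels configured, say a best-effort lane for priorities 0-9 and a low-latency lane for anything higher, a sample written with TRANSPORT_PRIORITY 42 would be scheduled on the low-latency lane.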
The answer to your specific question can now be found in how we exploit the standardized 'logical' DDS partition concept by allowing a dynamic mapping of "topic.partition" combinations onto what we've called OpenSplice DDS 'Network Partitions', which are basically named 'physical' multicast groups. Your assessment is basically right: when a publishing node multicasts data on the wire, it will be picked up by those nodes that have joined that multicast group (after which such a received UDP fragment needs to be de-serialized and delivered to all relevant DDS Subscribers and their DataReaders on that physical node).
Conversely, physical nodes that have NOT joined the multicast group will not be 'bothered' with that data (i.e. it is filtered out by the hardware).
So how can you benefit from this architecture in a situation (as you described) where there might be very large topics that you don't want distributed to nodes that have no DDS subscribers for them? You can probably guess the answer: by exploiting the DDS partition concept in combination with the OpenSplice DDS feature that optimizes efficiency by allowing logical DDS partitions to be mapped onto physical multicast groups.
Finally, to be very explicit about who needs to do what when:
1. system engineers who do information modeling and (thus) are aware of the syntax (types/structure) and semantics (meaning and non-functionals, including scoping, urgency, importance, etc.) of related pieces of data (i.e. the DDS Topics) can identify the need for, for instance, a dedicated logical DDS partition for 'high-volume data'. There are MDE tools available (such as our OpenSplice DDS Powertools) to graphically model the topics, partitions and all other related (system-wide) QoS policies.
2. application developers can then re-use those models (or that knowledge) of QoS-annotated topics and partitions when implementing their business logic. They basically need to be aware neither of system-level information (i.e. the topic-level QoS policies for distribution/delivery) nor of the deployment-level mappings of these onto physical processing/networking resources such as multicast groups, priority lanes, packing, best-effort/reliable UDP multicasting, etc. In this way the DDS abstractions allow application developers to fully concentrate on 'their' domain issues.
3. system integrators who ARE aware of the actual deployment environment as well as of the desired (system-wide) logical partitioning (and delivery and persistence requirements, for that matter) can define (and even tune 'at runtime') the optimal (yet to the applications fully transparent) mapping between:
a. logical DDS partitions (such as the high-volume partition) and an OpenSplice DDS 'Network Partition', i.e. a physical multicast group (which includes the ability to specify that on nodes that don't have subscribers attached to the matching logical DDS partitions, the physical 'Network Partition' will not be connected, i.e. will NOT join the multicast group);
b. (logical) information 'importance' (as expressed by the actual/runtime DDS TRANSPORT_PRIORITY QoS policy value) and the selection of the appropriate physical priority band (an OpenSplice DDS 'NetworkChannel', also supporting real-time pre-emption of other information queued in that channel);
c. (logical) information 'urgency' (as expressed by the actual/runtime DDS LATENCY_BUDGET QoS policy value) and the physical packing of samples of multiple topics from multiple publishers (for a certain partition set) on a node into UDP frames (of configurable 'fragment size'), and their actual distribution based upon the configurable traffic shaping, reliability and reactivity of the specific network channel.
Concluding: known characteristics of information, such as topic size, can be utilized in DDS by creating appropriate logical 'partitions' for that 'kind' of data; these determine the boundaries of the sharing and distribution of data published in and subscribed to within those partitions. OpenSplice DDS has optimized this 'logical' feature by allowing it to be mapped onto the 'physical' multicasting features of modern networks, such that only specific sets of nodes (those whose 'network partition' has actually joined the related multicast group) have the relevant data distributed to them.
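The node-side decision described above (join a multicast group only when a local subscriber needs it) can be sketched as follows. This is a hypothetical model of the idea, not the actual OpenSplice networking service; the mapping table and `groups_to_join` helper are invented for illustration:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Hypothetical model: which multicast groups should this node join?
// 'mapping' pairs a logical DDS partition name with the multicast
// address of the network partition it is mapped onto. A group is
// joined only when some local subscriber uses a mapped DDS partition;
// nodes without such subscribers never join, so the NIC filters the
// (possibly multi-MByte) traffic out in hardware.
std::set<std::string> groups_to_join(
        const std::vector<std::pair<std::string, std::string> >& mapping,
        const std::set<std::string>& local_subscriber_partitions) {
    std::set<std::string> groups;
    for (const auto& m : mapping) {
        if (local_subscriber_partitions.count(m.first))
            groups.insert(m.second);
    }
    return groups;
}
```

For example, a node whose only subscribers live in DDS partition "p2" would join just the group mapped to "p2" and would never see traffic for "p3", while a node with no matching subscribers joins nothing at all.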
Hope this explains a little,
Thanks,
Hans
Hans van 't Hag
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.v...@prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
Thanks very much, Hans, for your answer.
I have immediately tried this feature but I am not able to send data on
a specific network partition.
I have modified my ospl.xml file so that, according to my
understanding, I should now have 3 network partitions (1 Global and 2
others).
I have also done some one to one mapping between DDS partitions and
OpenSplice network partitions using the PartitionMapping element in the
ospl.xml file.
My ospl.xml file is attached to this e-mail.
Then, I have changed my publisher code so that it sends its data using one
of my DDS partitions (the one called "p2").
I have done this by modifying the partition QoS of the PublisherQos
structure passed to the method create_publisher() of my
DomainParticipant instance.
According to my ospl.xml file, I was expecting my data to be published
using the 226.20.21.22 class D address.
But Wireshark tells me that it is in fact published using the
226.10.11.12 class D address, which is the address I have given for the
Global partition.
What am I doing wrong?
BTW, I have also seen in the Deployment Guide, chapter 3.9.3 about DDSI,
that this notion of network partition is not yet implemented for the
DDSI protocol.
Do you have an idea whether it will be implemented one day and, if yes, when?
Once more, thanks for your answers.
Emmanuel Taurel
The attribute DCPSPartitionTopic of PartitionMapping in your ospl.xml
doesn't have the correct syntax, which causes your problem. This
attribute should be a valid partitionTopic Expression.
A partitionTopic expression consists of a partition name and a topic name,
separated by a '.'; either part can optionally be replaced by the '*'
wildcard.
Valid partitionTopic expressions are:
PartitionFoo.TopicBar
PartitionFoo.*
*.TopicBar
*.*
So the <PartitionMappings> section of your ospl.xml should look like
this:
<PartitionMappings>
  <PartitionMapping DCPSPartitionTopic="p2.*" NetworkPartition="part-2"/>
  <PartitionMapping DCPSPartitionTopic="p3.*" NetworkPartition="part-3"/>
</PartitionMappings>
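For illustration, the wildcard matching described above can be modeled with POSIX fnmatch-style semantics. The sketch below is an assumption about the matching rules, not the actual OpenSplice implementation (the `PartitionMapping` struct and `resolve_address` helper are invented names); it also shows why an unmatched expression falls back to the default (Global) network partition's address:

```cpp
#include <cassert>
#include <fnmatch.h>  // POSIX wildcard matching
#include <string>
#include <vector>

// One PartitionMapping entry: a "partition.topic" expression and the
// multicast address of the network partition it maps onto.
struct PartitionMapping {
    std::string expression;  // e.g. "p2.*"
    std::string address;     // e.g. "226.20.21.22"
};

// Model of the resolution step: the first mapping whose expression
// matches "partition.topic" wins; anything unmatched falls back to the
// Global (default) network partition. This is why a malformed
// expression such as a bare "p2" silently sends data on the Global
// address: "p2" never matches "p2.MyTopic".
std::string resolve_address(const std::vector<PartitionMapping>& mappings,
                            const std::string& partition,
                            const std::string& topic,
                            const std::string& global_address) {
    const std::string key = partition + "." + topic;
    for (const auto& m : mappings) {
        if (fnmatch(m.expression.c_str(), key.c_str(), 0) == 0)
            return m.address;
    }
    return global_address;  // no match: default (Global) partition
}
```

Under this model, "p2.*" routes samples written in DDS partition "p2" (any topic) to 226.20.21.22, while the invalid expression "p2" matches nothing and everything stays on the Global address, which is consistent with what Wireshark showed.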
Hope this helps,
Patrick
Thanks for the advice. I have followed it carefully: I have changed my
DCPSPartitionTopic attribute value from "p2" to "p2.*", but Wireshark still
tells me that the data
are sent using IP address 226.10.11.12 instead of 226.20.21.22!
It seems that the "problem" is still there.
Emmanuel
Could you supply the source of your test application and, if generated,
the ospl-info.log and ospl-error.log files? This will enable us to
further analyse and explain the observed behaviour.
Regards,
No problem.
Within this mail, you will find as attached files:
Reader.cpp which is the subscriber source file
Simpler.cpp which is the publisher source file
ospl-info.log from the publisher host
ospl-info.sub.log from the subscriber host
mcast-dds which is the Wireshark network capture file
The publisher is running on an Ubuntu 9.04 box (32-bit).
The subscriber is also running on an Ubuntu box, but 8.10. It's a 64-bit
computer, but I do not compile on it; I always compile on the 32-bit
computer.
Good luck
Emmanuel Taurel
These warnings appear in the ospl-info.log:
==========================================================
Report : WARNING
Date : Mon Jul 13 16:00:29 2009
Description : Mapping onto unknown network partition 'part-2' did not
succeed, using the default partition
Node : pcantares
Process : networking (23494)
Thread : main thread b7df1990
Internals : V4.1.090617/networking: creating partition
mapping/nw_partitions.c/329/0/345827891
=========================================================
Report : WARNING
Date : Mon Jul 13 16:00:29 2009
Description : Mapping onto unknown network partition 'part-3' did not
succeed, using the default partition
Node : pcantares
Process : networking (23494)
Thread : main thread b7df1990
Internals : V4.1.090617/networking: creating partition
mapping/nw_partitions.c/329/0/346005624
=========================================================
These indicate that the part-2 and part-3 network partitions were not
configured properly. Closer inspection indeed shows a problem in the
configuration of those network partitions in ospl.xml: the "Name"
attribute of the <NetworkPartition> element should be written with a
capital 'N'.
<NetworkPartitions>
  <NetworkPartition Name="part-2" Address="226.20.21.22"/>
  <NetworkPartition Name="part-3" Address="226.30.31.32"/>
</NetworkPartitions>
Hope this helps,
Patrick
Bingo, that was it!
By the way, do you have an idea whether similar features will be implemented
for the DDSI protocol?
If yes, do you already know when they will be available?
Sincerely
Emmanuel