I am in the process of migrating our existing network drivers from
the END driver model to IPNET drivers (from VxWorks 6.5 to VxWorks
6.7; VxBus support already exists in the current drivers).
VxWorks provides very good documentation on what needs to be done
for the migration.
Before I venture into this, I want to understand what I will gain.
The only thing I understood is that the buffer management method in
IPNET is different from the END driver design, which is based on
BSD. The VxWorks migration guide and device driver development guide
mention that performance will improve if we use IPNET, but I did not
understand how it actually enhances performance.
Does this really give a performance enhancement?
Any pointers to documentation on IPNET versus END drivers would
help me.
Thanks,
Vidhu
I'm going to give a little background on this, just to make sure we're
on the same page.
When VxWorks first acquired networking, it was more or less a port of
the BSD code, including the BSD ifnet-style driver model. But there
was a perceived problem with this: some customers wanted to use either
a 3rd party TCP/IP stack, or some other kind of network protocol
entirely, but the interface drivers could not be separated from the
stack since the BSD ifnet model tightly couples the driver code to the
TCP/IP stack code via several shared data structures and APIs.
Wind River's solution to this was to introduce the END API and the
MUX. The MUX has the protocol stack on one side and the END driver
API, which has its own driver-private data structures, on the other. A
properly written END driver has no implicit knowledge of the protocols
layered on top of it, and is thus stack-independent. This gives
customers the option to remove the Wind River TCP/IP stack while still
being able to use the Wind River-provided Ethernet driver support.
The following things have happened in various releases:
VxWorks 5.4:   ships with an IPv4-only TCP/IP stack; MUX/END API
               supported, but older ifnet drivers still supported to
               some extent as well
VxWorks 5.5.1: ships with an IPv4-only TCP/IP stack; ifnet driver
               model deprecated; IPv6-enabled stack available as a
               separate product
VxWorks 6.0:   IPv6-enabled BSD code base now the default; ifnet
               driver model defunct; some new stack features (TCP/IP
               checksum offload)
VxWorks 6.4:   still the same BSD-based code base; introduction of
               the VxBus driver model
VxWorks 6.5:   BSD-based TCP/IP stack removed and replaced with the
               IPNET TCP/IP stack after Wind River's acquisition of
               Interpeak -- VxWorks uses IPNET from now on
VxWorks 6.6:   more enhancements to VxBus; more VxBus END drivers
               added
VxWorks 6.7:   still more enhancements to VxBus; still more VxBus
               END drivers added; introduction of the END2 API (see
               below)
Note that the change from the BSD TCP/IP stack to IPNET actually has
no effect on END drivers themselves. A properly written END driver
will be source compatible with either stack (though it may need
to be recompiled). There were a few cases where some END drivers
#included header files that were specific to the BSD code base, but
these were largely holdovers from when earlier ifnet drivers were
converted into END drivers (very often the #included headers weren't
even necessary and the extraneous #includes could just be deleted
without any ill effects). The removal of these extra #includes was all
that was necessary to "port" them to the IPNET stack. In fact, IPNET
is largely designed to use whatever native driver model the underlying
OS provides.
The main difference between the IPNET stack and the older BSD stack
is how packet buffers are handled: the BSD stack is built around
mBlks, while IPNET is not. The IPNET code is designed to be
OS-independent (and can in fact be used on several different OSes).
As a consequence, it uses its own internal buffer management scheme
and has its own structures for describing a packet. The IPNET design
always encapsulates a packet in a single contiguous buffer. In
contrast, the BSD design allows a packet to be fragmented across
multiple buffers, which are linked together in an mBlk chain. END
drivers still use the mBlk model and are ignorant of IPNET's
internal mechanisms. There is a translation layer immediately above
the MUX where the IPNET stack's internal buffers are converted to
mBlks and vice versa: this translation layer keeps the drivers
isolated from IPNET's internals, but it also introduces a little bit
of overhead for each packet received and sent.
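To make the buffer models concrete, here is a minimal sketch of
what the mBlk side looks like from a driver's point of view. The
types and fields (M_BLK_ID, mBlkHdr.mNext, mBlkHdr.mLen) are the
standard netBufLib ones; the length-summing routine itself is purely
illustrative and not taken from any particular driver.

#include <vxWorks.h>
#include <netBufLib.h>    /* M_BLK_ID and the mBlk/clBlk definitions */

/*
 * Walk an mBlk chain and add up the payload length.  A packet handed
 * to a BSD-style (END) driver may be spread across several mBlks
 * linked through mBlkHdr.mNext; IPNET's native representation is a
 * single contiguous buffer, so no equivalent walk exists there.
 */
LOCAL int pktLenGet (M_BLK_ID pMblk)
    {
    int len = 0;

    for (; pMblk != NULL; pMblk = pMblk->mBlkHdr.mNext)
        len += pMblk->mBlkHdr.mLen;

    return (len);
    }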
What this means is that if you have an existing END driver from
VxWorks 6.4 or earlier, you shouldn't need to do anything to use it
with VxWorks 6.5 and later besides recompiling the code. There are
two things you'll need to check for:
- As mentioned before, if your driver #includes any header files
  that were internal to the BSD stack, you'll have to remove those
  #includes, because those files are no longer available.
- Because the IPNET stack requires packets to be unfragmented, you
  must ensure that your driver always passes a single mBlk to the
  MUX via END_RCV_RTN_CALL(), rather than an mBlk chain (a minimal
  sketch follows below). Most drivers always use a single buffer per
  packet, but I've seen one or two exceptions.
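As a rough illustration of the second point, here is a heavily
simplified receive path that hands the MUX exactly one
self-contained mBlk per frame. netTupleGet() and END_RCV_RTN_CALL()
are the standard netBufLib/END facilities; the driver control
structure, the frame-copy step and the names (MY_DRV_CTRL,
RX_BUF_SIZE, myEndRecv) are invented for the example, not taken from
any shipping driver.

#include <vxWorks.h>
#include <end.h>          /* END_OBJ, END_RCV_RTN_CALL() */
#include <netBufLib.h>    /* netTupleGet(), M_BLK_ID */
#include <string.h>

#define RX_BUF_SIZE 1536            /* illustrative cluster size */

typedef struct my_drv_ctrl          /* hypothetical driver control struct */
    {
    END_OBJ     end;                /* standard END object, first member */
    NET_POOL_ID pNetPool;           /* pool to allocate tuples from */
    } MY_DRV_CTRL;

/*
 * Hand one received frame to the MUX as a single, unfragmented mBlk.
 * The frame is copied out of the (hypothetical) DMA buffer into one
 * cluster, so the stack never sees an mBlk chain.
 */
LOCAL STATUS myEndRecv (MY_DRV_CTRL * pDrvCtrl, char * pFrame, int frameLen)
    {
    M_BLK_ID pMblk;

    pMblk = netTupleGet (pDrvCtrl->pNetPool, RX_BUF_SIZE,
                         M_DONTWAIT, MT_DATA, FALSE);
    if (pMblk == NULL)
        return (ERROR);             /* out of buffers: drop the frame */

    memcpy (pMblk->mBlkHdr.mData, pFrame, frameLen);
    pMblk->mBlkHdr.mLen    = frameLen;
    pMblk->mBlkPktHdr.len  = frameLen;
    pMblk->mBlkHdr.mFlags |= M_PKTHDR;

    /* One mBlk, no chain: what the IPNET translation layer expects */
    END_RCV_RTN_CALL (&pDrvCtrl->end, pMblk);

    return (OK);
    }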
The END2 API introduced in VxWorks 6.7 is designed to address the
performance issues that can arise due to the translation layer that I
mentioned previously. Although the code is careful to avoid actual
data copies, there is some work involved in having to convert back and
forth between the IPNET stack's internal data representation and the
mBlk model used by traditional END drivers. The END2 model is a
hybrid approach that allows the interface driver to use the stack's
internal data structures in order to avoid the translation stage. For
example, instead of having to convert from an IPNET_PACKET to an mBlk
when transmitting a packet, the driver's send routine is just given
the IPNET_PACKET directly. This avoids having to encapsulate the
IPNET_PACKET's buffer in an mBlk/clBlk first, as well as having to
release the mBlk/clBlk later. For receive, the driver also provides
received packets to the stack directly as IPNET_PACKETs instead of
mBlk tuples.
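To give a feel for what that translation involves, here is a sketch
of the mBlk/clBlk encapsulation the traditional END path implies for
each packet handed to a driver, using the standard netBufLib calls
(netMblkGet, netClBlkGet, netClBlkJoin, netMblkClJoin). The
surrounding names and the do-nothing free routine are hypothetical;
the point is just that every packet costs an allocate-and-join on
the way in and a netMblkClChainFree() on the way out, which is the
work END2 removes by handing the driver the stack's own packet
descriptor directly.

#include <vxWorks.h>
#include <netBufLib.h>

/*
 * Hypothetical free routine.  netBufLib invokes it (with the three
 * argument values given to netClBlkJoin()) once the packet has been
 * consumed; a real translation layer would return the buffer to its
 * owner here.
 */
LOCAL STATUS myClFreeRtn (int arg1, int arg2, int arg3)
    {
    return (OK);
    }

/*
 * Wrap an already-filled packet buffer in an mBlk/clBlk pair so it
 * can be handed to a traditional END driver.  END2 skips this step:
 * the driver works on the stack's own packet structure instead.
 */
LOCAL M_BLK_ID pktToMblk (NET_POOL_ID pNetPool, char * pBuf, int len)
    {
    M_BLK_ID  pMblk;
    CL_BLK_ID pClBlk;

    pMblk = netMblkGet (pNetPool, M_DONTWAIT, MT_DATA);
    if (pMblk == NULL)
        return (NULL);

    pClBlk = netClBlkGet (pNetPool, M_DONTWAIT);
    if (pClBlk == NULL)
        {
        netMblkFree (pNetPool, pMblk);
        return (NULL);
        }

    /* Attach the caller's buffer to the clBlk, then join the two */
    netClBlkJoin (pClBlk, pBuf, len, (FUNCPTR) myClFreeRtn, 0, 0, 0);
    netMblkClJoin (pMblk, pClBlk);

    pMblk->mBlkHdr.mLen    = len;
    pMblk->mBlkPktHdr.len  = len;
    pMblk->mBlkHdr.mFlags |= M_PKTHDR;

    return (pMblk);     /* released later with netMblkClChainFree() */
    }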
The performance difference will be most noticeable with
applications where there is a high frame rate, such as IP
forwarding: by reducing how many cycles it takes to move a frame
back and forth between the driver and the stack, you reduce CPU
overhead and increase the number of frames per second that can be
processed. There may also be a reduction in CPU overhead for some
streaming applications, though their throughput may not appear to
change much. VxWorks
6.7 comes with some commonly used drivers in both END and END2 form
as examples (notably the Intel PRO/1000 "gei" driver, and the
Freescale TSEC/eTSEC drivers). If possible, you might try running some
benchmarks between the two drivers to compare the difference(s) in
performance. Swapping between them is fairly easy: you can just remove
the INCLUDE_GEI_VXB_END component and add the INCLUDE_GEI_VXB_END2
component in its place, and rebuild your image.
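If your image is a BSP build configured through config.h, the swap
might look like the fragment below (the component names are the ones
mentioned above; a Workbench or vxprj kernel configuration project
would make the equivalent change through the kernel configuration
tool instead):

/* config.h -- illustrative fragment only */

/* Use the END2 form of the Intel PRO/1000 driver instead of END */
#undef  INCLUDE_GEI_VXB_END
#define INCLUDE_GEI_VXB_END2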
So, if you're considering producing END2 versions of your drivers, I
guess the thing to consider is the intended application. If your
concern is achieving the highest possible frame rate, then it may be
worth it to you. On the other hand, if you're interested in
portability across different VxWorks releases, it may be more valuable
to stick with the existing END model.
I hope this helps.
-Bill
Sorry for replying late. I was on vacation.
Thanks, Bill. That was very good information.
I had one question related to zero copy.
Is there any difference in the way the zero-copy feature is
implemented between VxWorks 6.5 and 6.7, from a driver
implementation perspective?
Thanks,
Vidhu