
DEGXA-TA versus DEGXA-TB performance


Rich Jordan

Nov 24, 2015, 1:43:28 PM
One of our customer sites is retiring their AlphaServer (and the spare on the shelf) and offered me any parts I wanted before the machines get recycled (they are larger systems). We're still running a test DS10 and a DS10-L, and I have a customer with a DS15, all still on 100 Mbit Ethernet.

The site has one DEGXA-TA and one -TB card. Is there any actual, measurable performance difference between the two when installed in either the DS10 or the DS15 system? The only reference I found says that the -TB is a later revision of the base Broadcom card and that, per the user guide:


" Both sets of NICs can be plugged into either a PCI or PCI-X I/O bus; however, the (-SA & -TA) can only be operated in PCI-mode when installed in a PCI-X slot on an HP AlphaServer or AlphaStation platform. The -SA and -TA are configured by the Console during power-up to only operate in PCI mode. The -SB and -TB are configured by the console to match the maximum operating characteristics of the PCI-X or PCI slot they are plugged into. "

Does this translate into any measurable performance benefit with either of the two systems? I'm taking the cards regardless.
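
For my own reference when I test them: the boot messages show what mode the console actually picked for a card, and if I have the LANCP syntax right, something like the following should dump the settings and give a crude before/after counter check (EWA0 is just an example unit name; the counters trick only brackets a large file copy):

$ ! see what the console/driver negotiated for the card
$ MCR LANCP SHOW DEVICE EWA0/CHARACTERISTICS
$ !
$ ! crude throughput check: snapshot counters, move a big file, snapshot again
$ MCR LANCP SHOW DEVICE EWA0/COUNTERS
$ !   ...copy a large file to or from the other system here...
$ MCR LANCP SHOW DEVICE EWA0/COUNTERS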

Thanks
Rich

Stephen Hoffman

Nov 24, 2015, 3:12:56 PM
On 2015-11-24 18:43:26 +0000, Rich Jordan said:

> ...DEGXA-TA and one -TB card... Does this translate into any
> measurable performance benefit with either of the two systems?

AFAIK, no.

I'd suspect that the available bandwidth will probably be more limited
by the bandwidth of the OpenVMS server and its network stack, too. Not
by the NIC.

See previous discussions of the differences between Linux and OpenVMS
network performance here in the comp.os.vms newsgroup for related
details.

See this thread:
<https://groups.google.com/d/msg/comp.os.vms/tBFVFWsjak4/xCSSF05xrdkJ>

Semi-related, with links to Linux 10 GbE technical details:
<http://labs.hoffmanlabs.com/node/840>


--
Pure Personal Opinion | HoffmanLabs LLC

Hans Vlems

Nov 24, 2015, 4:56:03 PM
Would they part out the memory as well?
If so, what model AlphaServer is it?
Hans

Steven Schweda

Nov 24, 2015, 5:11:03 PM
My "DEGXA" cards are generic Broadcom (or other-vendor)
cards with their IDs altered, so I know nothing, but, around
here, at start-up, an XP1000 says:

[...]
OpenVMS (TM) Alpha Operating System, Version V8.4
[...]
%EWA0, Auto-negotiation mode set by console
%EWA0, Auto-negotiation (internal) starting
%EWB0, Auto-negotiation mode assumed set by console
%EWB0, Jumbo frames enabled per system parameter LAN_FLAGS bit 6
%EWB0, DEGXA-TB located in 64-bit, 33-mhz PCI slot
%EWB0, Device type is BCM5703C (UTP) Rev B0 (11000000)
%EWB0, Link up: 1000 mbit, full duplex, flow control (txrx)
[...]

So, someone seems to think that it's a "-TB". Does a
"-TA" have a chip different from "BCM5703C"? Knowing
nothing, I'd expect the bus capabilities to be determined
more by the chip than anything else (assuming that it's
attached to all the bus wires it can be).
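
(Side note, since the jumbo-frame line in that boot log comes from
LAN_FLAGS: bit 6 enables jumbo frames on the Gigabit NICs, and if I
remember the SYSGEN incantation right it goes something like the
following, taking effect at the next boot. Check the existing value
first and OR the bit in rather than overwriting other flags.)

$ ! LAN_FLAGS bit 6 (%X40) = enable jumbo frames on Gigabit NICs
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SHOW LAN_FLAGS
SYSGEN> SET LAN_FLAGS %X40
SYSGEN> WRITE CURRENT
SYSGEN> EXIT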

The old, stable (that is, stagnant) XP1000 console
firmware knows only:

>>>show config
[...]
Slot Option Hose 0, Bus 0, PCI
[...]
13 16C714E4/601B0E11

(And _I_ determined the last half of that.)
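
(If I'm decoding that right, 16C714E4 splits into device ID 16C7 and
vendor ID 14E4, which is a Broadcom BCM5703-family chip reporting its
own identity, while 601B0E11 is the subsystem ID and subsystem vendor
ID, 0E11 being Compaq; that second half is what lets the console and
driver treat the card as a DEGXA rather than a generic Broadcom board.)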



> I'd suspect that the available bandwidth will probably be
> more limited by the bandwidth of the OpenVMS server and its
> network stack, too. Not by the NIC.

I'm with him, as far as the PCI question goes, but I believe
that a gigabit card did actually improve things over the
built-in 100 Mbit DE500 (equivalent).

Stephen Hoffman

Nov 24, 2015, 5:29:03 PM
On 2015-11-24 22:11:00 +0000, Steven Schweda said:

>> Hoff: I'd suspect that the available bandwidth will probably be more
>> limited by the bandwidth of the OpenVMS server and its network stack,
>> too. Not by the NIC.
>
> I'm with him, as far as the PCI question goes, but I believe that a
> gigabit card did actually improve things over the built-in 100 Mbit
> DE500 (equivalent).

That I'd believe. 100 MbE is slow. Any recent Wi-Fi runs
substantially faster than those old 100 MbE NICs. Things get more
interesting as NIC speeds increase. The Linux articles referenced
up-thread provide some insight into this: 10 GbE and faster are
(unsurprisingly) much harder for an OS to deal with than lower speeds.
40 GbE NICs are already commonly available, too, though AFAIK they are
not supported on OpenVMS. Network bandwidth is another area of OpenVMS
that VSI will eventually and almost inevitably be looking at,
particularly with their network drivers and that new IP stack, as not
getting most of 'n' Gb out of a supported 'n' GbE NIC won't be very
popular with their customers.

Rich Jordan

Nov 24, 2015, 7:13:26 PM
I'll ask.

David Froble

Nov 24, 2015, 7:41:26 PM
And here I was feeling ok with my 10baseT and 10base2 stuff ....

Then you guys spoil it ....

:-(

terry+go...@tmk.com

Nov 25, 2015, 2:00:46 AM
On Tuesday, November 24, 2015 at 7:41:26 PM UTC-5, David Froble wrote:
> And here I was feeling ok with my 10baseT and 10base2 stuff ....
>
> Then you guys spoil it ....
>
> :-(

(0:279) host1:~terry# iperf -c host2
------------------------------------------------------------
Client connecting to host2, TCP port 5001
TCP window size: 32.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.20.30.40 port 49975 connected with 10.20.30.41 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 11.5 GBytes 9.84 Gbits/sec

8-}

Stephen Hoffman

Nov 25, 2015, 2:11:53 PM
On 2015-11-25 07:00:43 +0000, terry+go...@tmk.com said:

> On Tuesday, November 24, 2015 at 7:41:26 PM UTC-5, David Froble wrote:
>> And here I was feeling ok with my 10baseT and 10base2 stuff ....
>
> [ 3] 0.0-10.0 sec 11.5 GBytes 9.84 Gbits/sec

Yeah; 10 GbE is handy. Need a box with built-in 10 GbE, a decent PCIe
config or Thunderbolt 2 connection for that, though. Not an option
with OpenVMS below rx2660, IIRC.

On an older Wi-Fi connection to a GbE-connected server, and since I'd
mentioned Wi-Fi being faster than 100 MbE, here it is with iperf3...

[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 198 MBytes 166 Mbits/sec sender
[ 4] 0.00-10.00 sec 198 MBytes 166 Mbits/sec receiver

MG

Nov 26, 2015, 11:43:38 AM
On 25 Nov 2015 at 20:11, Stephen Hoffman wrote:
> Yeah; 10 GbE is handy. Need a box with built-in 10 GbE, a decent PCIe
> config or Thunderbolt 2 connection for that, though. Not an option
> with OpenVMS below rx2660, IIRC.

A 133-MHz, 64-bit PCI-X slot ought to come close to sustaining it, too
(well, in theory anyway), with a capable 10-Gbit Ethernet NIC, like the
ones I ran in my rx2600s, rx2620s and DS15. But as you, or someone
else, remarked earlier: don't expect it under VMS (or at least not
without spending an eternity tuning things to the extreme).
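
(Rough arithmetic: 64 bits x 133.3 MHz works out to about 1.06 GB/s,
or roughly 8.5 Gbit/s of theoretical peak bus bandwidth, so even a
clean PCI-X 133 slot tops out somewhat below 10 GbE line rate before
any protocol overhead is counted.)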

- MG

Hans Vlems

Nov 26, 2015, 5:53:10 PM
Correct, Marco. I got around 90 Mb/s on a large FTP transfer between a DS10 (6/466) and an XP1000, both fitted with a DEGXA-TA. OTOH, with just FastFD Ethernet it would have been a lot slower...
Hans