Solaris 10 - Bridging Network Interfaces?

Michelle
Aug 22, 2007, 12:43:12 PM

Someone mentioned to me that we can bridge the network interfaces
together on a Solaris 10 (SPARC) server and double the bandwidth (and use
one IP).

Is this possible with a stock Sun OS install?

Argo Sõõru
Aug 22, 2007, 1:23:26 PM

Michelle wrote:
> Someone mentioned to me that we can bridge the network interfaces
> together on a Solaris 10 (SPARC) server and double the bandwidth (and use
> one IP).
I think Link Aggregation is what you need.
http://docs.sun.com/app/docs/doc/816-4554/6maoq01ne?l=en&q=Link+Aggregation&a=view

...but not every NIC is aggregation capable. You also need a switch with
IEEE 802.3ad Link Aggregation support, and you must configure the
aggregated switch ports accordingly.
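
For reference, a minimal sketch of the built-in Solaris 10 commands (assuming
GLDv3-capable NICs such as bge or e1000g; the interface names and address
below are made up, and ce-class NICs typically need the separate Sun Trunking
product instead):

  # create link aggregation key 1 over two NICs
  dladm create-aggr -d bge0 -d bge1 1

  # plumb the aggregated interface and assign the single IP address
  ifconfig aggr1 plumb 192.168.10.5 netmask 255.255.255.0 up

  # check the aggregation and its ports
  dladm show-aggr

To keep it across reboots you would also create /etc/hostname.aggr1 with the
address.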

Argo

Michelle
Aug 22, 2007, 1:34:45 PM

thank you!

Darren Dunham
Aug 22, 2007, 3:43:42 PM

Michelle <newsgrps_r...@mst.ca> wrote:
> Someone mentioned to me that we can bridge the network interfaces
> together on a Solaris 10 (SPARC) server and double the bandwidth (and use
> one IP).

Sort of. Link Aggregation (and SunTrunking) do this, but neither will
double (or more) a single stream of data. If you're trying to (for
instance) make an rsync go twice as fast, this won't do it.

If you have multiple streams though, it should help quite a bit.

--
Darren Dunham ddu...@taos.com
Senior Technical Consultant TAOS http://www.taos.com/
Got some Dr Pepper? San Francisco, CA bay area
< This line left intentionally blank to confuse you. >

James Carlson
Aug 22, 2007, 4:30:15 PM

ddu...@taos.com (Darren Dunham) writes:
> Michelle <newsgrps_r...@mst.ca> wrote:
> > Someone mentioned to me that we can bridge the network interfaces
> > together on a Solaris 10 (SPARC) server and double the bandwidth (and use
> > one IP).
>
> Sort of. Link Aggregation (and SunTrunking) do this, but neither will
> double (or more) a single stream of data. If you're trying to (for
> instance) make an rsync go twice as fast, this won't do it.
>
> If you have multiple streams though, it should help quite a bit.

Note also that IPMP in an active-active configuration with multiple
data addresses can do exactly the same thing.

As a bonus, it's not dependent on the Ethernet driver type or on
specific switch support. But it does require the allocation of
multiple IP addresses.
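
A rough sketch of what such an active-active group can look like on Solaris 10
(the group name, interface names, and addresses here are only placeholders):

  /etc/hostname.ce0 (first data address plus a non-failover test address):
      192.168.10.11 netmask + broadcast + group prod0 up \
      addif 192.168.10.21 deprecated -failover netmask + broadcast + up

  /etc/hostname.ce1 (second data address plus its own test address):
      192.168.10.12 netmask + broadcast + group prod0 up \
      addif 192.168.10.22 deprecated -failover netmask + broadcast + up

With two data addresses in one group, outbound connections are spread across
both interfaces, and a data address moves to the surviving interface if its
link fails.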

--
James Carlson, Solaris Networking <james.d...@sun.com>
Sun Microsystems / 1 Network Drive 71.232W Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757 42.496N Fax +1 781 442 1677

Michelle
Aug 27, 2007, 12:59:11 PM

I am in the process of configuring Sun Trunking 3.1 on an E2900 (using
two ce interfaces). The network people tell me that they cannot enable
this if we connect to two switches, only if both ports are connected to
a single switch.

We wanted to use two switches if possible to provide redundancy at that
level.

Before moving ahead with Sun Trunking 3.1, I was given a brochure on
using IP Network Multipathing (IPMP)...

http://www.sun.com/blueprints/1102/806-7230.pdf ...

but since the doc was so old, I opted for Sun Trunking instead. I am
wondering whether multipathing would let me use redundant switches.

The multipathing method accomplishes this without adding software
packages, and uses ifconfig to configure.
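
A hand-typed sketch of that ifconfig configuration for two interfaces in one
group (the group name and addresses are invented for illustration):

  # data address plus non-failover test address on ce0, IPMP group "prod0"
  ifconfig ce0 plumb 192.168.10.11 netmask + broadcast + group prod0 up
  ifconfig ce0 addif 192.168.10.21 netmask + broadcast + deprecated -failover up

  # the same again on ce1, in the same group
  ifconfig ce1 plumb 192.168.10.12 netmask + broadcast + group prod0 up
  ifconfig ce1 addif 192.168.10.22 netmask + broadcast + deprecated -failover up

The persistent form is the /etc/hostname.ce0 and /etc/hostname.ce1 layout
sketched earlier in the thread.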

Has anyone had any experience with these methods to provide higher
bandwidth using multiple Ethernet interfaces, and failover between them?

thanks!

gerryt
Aug 27, 2007, 1:13:50 PM

On Aug 27, 9:59 am, Michelle <newsgrps_rem0ve_t...@mst.ca> top posts:

> I am in the process of configuring SUN Trunking 3.1 on an E2900 (using
> two CE interfaces). What the network people tell me is that they cannot
> enable this if we connect to two switches, but only if they are
> connected to one switch.

They are right.

> We were wanting to use two switches if possible to provide redundancy on
> that level.

Then you need four ce's, I believe, or a quad Ethernet card that supports
aggregation.

> Prior to moving ahead with SUN Trunking 3.1, I was approached with a
> brochure on using internet protocol network multipathing...
> http://www.sun.com/blueprints/1102/806-7230.pdf ...
> but since the doc was so old, I opted with SUN Trunking instead. I am
> wondering if the multipathing would let me use redundant switches.

Sun Trunking is "older" than IPMP ...

> The multipathing method accomplishes this without adding software
> packages, and uses ifconfig to configure.

Yes we know

> Has anyone had any experience with these methods to provide higher
> bandwidth using multiple ethernet interfaces, and failover between them?

I have implemented both (actually sort of - I used aggregates on
T2000's), but not at the same time on the same box.
James may chime in and correct me, but I do believe you can "IPMP" two
aggregates over separate switches. If trunking supports it too, then it
should be in the docs.
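
A hedged sketch of that combination (NIC names, aggregation keys, group name,
and addresses are invented, and it assumes GLDv3 NICs that dladm can
aggregate; ce would need Sun Trunking for the aggregation layer):

  # one aggregation per switch
  dladm create-aggr -d bge0 -d bge1 1    # aggr1 cabled to switch A
  dladm create-aggr -d bge2 -d bge3 2    # aggr2 cabled to switch B

  # put both aggregations into one IPMP group
  ifconfig aggr1 plumb 192.168.10.11 netmask + broadcast + group prod0 up
  ifconfig aggr2 plumb 192.168.10.12 netmask + broadcast + group prod0 up

That is four NICs total, which lines up with needing four ports as noted
above.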

> Darren Dunham wrote:


> > Michelle <newsgrps_rem0ve_t...@mst.ca> wrote:
> >>Someone mentioned to me that we can bridge the network interfaces
> >>together on a Solaris 10 (SPARC) server and double the bandwidth (and use
> >>one IP).
> > Sort of. Link Aggregation (and SunTrunking) do this, but neither will
> > double (or more) a single stream of data. If you're trying to (for
> > instance) make an rsync go twice as fast, this won't do it.
> > If you have multiple streams though, it should help quite a bit.

Concur : >

Michelle
Aug 27, 2007, 2:09:07 PM

Thanks for the reply.

I am trying to understand the differences between trunking and IPMP.

From the docs, IPMP only provides higher bandwidth in outgoing traffic,
whereas trunking would provide it in both outgoing and incoming traffic?

IPMP works across physical switches when only using two network
interfaces, whereas Trunking would require double that number of network
interfaces.

Also, Trunking provides load-balancing options... would these be the
major differences?

Darren Dunham
Aug 27, 2007, 2:19:36 PM

Michelle <newsgrps_r...@mst.ca> wrote:
> I am in the process of configuring SUN Trunking 3.1 on an E2900 (using
> two CE interfaces). What the network people tell me is that they cannot
> enable this if we connect to two switches, but only if they are
> connected to one switch.

Right. This is a trunking/bonding protocol that relies on the specific
device to do the mux/demux. If you used separate switches, something
upstream not participating in the protocol would have to direct the
inbound traffic to each switch.

> We were wanting to use two switches if possible to provide redundancy on
> that level.

IPMP can provide interface/link/switch redundancy.

> Prior to moving ahead with SUN Trunking 3.1, I was approached with a
> brochure on using internet protocol network multipathing...
>
> http://www.sun.com/blueprints/1102/806-7230.pdf ...
>
> but since the doc was so old, I opted with SUN Trunking instead. I am
> wondering if the multipathing would let me use redundant switches.

Yes.

> The multipathing method accomplishes this without adding software
> packages, and uses ifconfig to configure.

Well, it's built into the networking portions of the kernel.

> Has anyone had any experience with these methods to provide higher
> bandwidth using multiple ethernet interfaces, and failover between them?

IPMP is good for failover. It's a little more difficult to get it to do
load balancing on inbound traffic because you need client cooperation
(spreading across multiple addresses).

SunTrunking (and Link Aggregation in Solaris 10) are conceptually
simpler because you just have one virtual interface.

Both should give you failover at an interface level. Both will give you
higher throughput if you can parallelize your traffic sufficiently.
Neither will bump up a single stream much.

James Carlson
Aug 27, 2007, 3:03:14 PM

Michelle <newsgrps_r...@mst.ca> writes:
> Thanks for the reply.
>
> I am questioning the differences between trunking and the IPMP.
>
> From the docs, IPMP only provides higher bandwidth in outgoing
> traffic, whereas trunking would provide it in both outgoing and
> incoming traffic?

No, that's not quite true.

IPMP's inbound load spreading depends on having multiple data
addresses. If you have multiple data addresses, then separate
outbound connections (to different destinations) will get separate
source addresses, causing the return traffic to be spread across the
interfaces. Also, if you have multiple data addresses in a group, you
can put all of those addresses into DNS as A records, and decent DNS
servers will round-robin requests for you, meaning that inbound
connections will be spread out as well.

It does take some effort to make all this work, though. 802.3ad has
the advantage that you don't need to do this at all -- you just need a
cooperative peer that can hash flows across the aggregation.
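
The DNS side of that is just multiple A records for one name; a minimal
zone-file sketch (the host name and addresses are placeholders):

  ; most name servers rotate the order of these answers, so inbound
  ; connections get spread across both IPMP data addresses
  myhost    IN  A  192.168.10.11
  myhost    IN  A  192.168.10.12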

> IPMP works across physical switches when only using two network
> interfaces, whereas Trunking would require double that number of
> network interfaces.

Doing trunking across multiple switches requires switches that support
this mode of operation -- the switches will need to cooperate together
to work as a single peer.

I don't see where you'd get a requirement for a double set of
interfaces, though.

Darren Dunham
Aug 27, 2007, 3:42:48 PM

Michelle <newsgrps_r...@mst.ca> wrote:
> Thanks for the reply.
>
> I am questioning the differences between trunking and the IPMP.
>
> From the docs, IPMP only provides higher bandwidth in outgoing traffic,
> whereas trunking would provide it in both outgoing and incoming traffic?

SunTrunking is a protocol between two devices. It allows the remote
device to use two paths for incoming traffic.

IPMP does not control remote devices. So any incoming traffic has to be
directed through normal means. In practice, this means that you'd want
to set up a second IP address on your other interface and find some way
to distribute your inbound traffic across both interfaces.

That's what I meant by suggesting that SunTrunking/Link Aggregation was
somewhat simpler to set up.

> IPMP works across physical switches when only using two network
> interfaces, whereas Trunking would require double that number of network
> interfaces.

Hm? IPMP works across interfaces. How you connect them to switches is
almost completely up to you.

SunTrunking and Link aggregation do not provide any bonding or failover
between links unless they go to the same switch. I'm not sure what you
mean by Trunking requiring double the number of interfaces...

Michelle
Aug 27, 2007, 3:44:05 PM

I've had some luck getting Trunking to work on our E2900 using ce0 and
ce1. Removing and reinstalling the network cable for ce1 works fine... it
is able to re-link after I reconnect the cable. However, ce0 won't
re-link after I remove the cable and reattach it. Any idea why this
would not relink?

Before tests:

Key: 0; Policy: 1;
Aggr MAC address: 0:3:ba:cb:e4:61
Name Original-Mac-Addr Speed Duplex Link Status
---- ----------------- ----- ------ ---- ------
ce0 0:3:ba:cb:e4:61 1000 full up enb
ce1 0:3:ba:cb:e4:62 1000 full up enb

Remove ce0 cable:

Key: 0; Policy: 1;
Aggr MAC address: 0:3:ba:cb:e4:61
Name Original-Mac-Addr Speed Duplex Link Status
---- ----------------- ----- ------ ---- ------
ce0 0:3:ba:cb:e4:61 1000 full down dis
ce1 0:3:ba:cb:e4:62 1000 full up enb

Reinstall ce0 cable:

Key: 0; Policy: 1;
Aggr MAC address: 0:3:ba:cb:e4:61
Name Original-Mac-Addr Speed Duplex Link Status
---- ----------------- ----- ------ ---- ------
ce0 0:3:ba:cb:e4:61 1000 full down dis
ce1 0:3:ba:cb:e4:62 1000 full up enb


Reinstall ce1 cable - network relinks.

Key: 0; Policy: 1;
Aggr MAC address: 0:3:ba:cb:e4:61
Name Original-Mac-Addr Speed Duplex Link Status
---- ----------------- ----- ------ ---- ------
ce0 0:3:ba:cb:e4:61 1000 full up enb
ce1 0:3:ba:cb:e4:62 1000 full up enb

Remove ce1 cable:

Key: 0; Policy: 1;
Aggr MAC address: 0:3:ba:cb:e4:61
Name Original-Mac-Addr Speed Duplex Link Status
---- ----------------- ----- ------ ---- ------
ce0 0:3:ba:cb:e4:61 1000 full up enb
ce1 0:3:ba:cb:e4:62 1000 full down dis

Reinstall ce1 cable - network relinks.

Key: 0; Policy: 1;
Aggr MAC address: 0:3:ba:cb:e4:61
Name Original-Mac-Addr Speed Duplex Link Status
---- ----------------- ----- ------ ---- ------
ce0 0:3:ba:cb:e4:61 1000 full up enb
ce1 0:3:ba:cb:e4:62 1000 full up enb
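
One thing worth checking while chasing this is whether the ce driver itself
sees the link come back, independent of the trunking software; a sketch using
the standard ndd parameters of the ce driver (instance 0 assumed for ce0):

  # select the ce instance, then read its link state (1 = up, 0 = down)
  ndd -set /dev/ce instance 0
  ndd -get /dev/ce link_status

  # speed and duplex can be read the same way
  ndd -get /dev/ce link_speed
  ndd -get /dev/ce link_mode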

James Carlson
Aug 28, 2007, 6:37:03 AM

Michelle <newsgrps_r...@mst.ca> writes:
> I've had some luck getting Trunking to work on our E2900 using ce0 and
> ce1. Removing and reinstalling the network cable for ce1 works
> fine... it is able to re-link after I reconnect the cable. However,
> ce0 won't re-link after I remove the cable and reattach it. Any idea
> why this would not relink?

This sounds a bit like CR 6329913. You should contact support.
