Adding X410 nodes to an existing 7.2.0.5 cluster with a different memory amount


Kenneth Van Kley

Dec 8, 2016, 12:35:53 PM
to Isilon Technical User Group
We have an existing X410 cluster running OneFS 7.2.0.5. Each node has 64 GB of memory.

Per vendor recommendation, we'll be adding 8 new nodes (each with 256 GB of memory) and then upgrading the existing 8 nodes from 64 GB to 256 GB each.

Since SmartPools will create a new node pool for the new configuration, what happens to the data on the old nodes after we upgrade them? Does OneFS have to re-protect the data and move it all around, or does it leave it where it is and just change which pool it belongs to?

Does the old pool just disappear once we've upgraded everything?
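
A quick way to check how the cluster is grouping nodes (and whether the new nodes land in a separate pool) is the storage pool CLI. A minimal sketch, assuming the OneFS 7.2-era `isi storagepool` command set; verify the exact subcommands against your release's CLI reference:

    # List node pools and the nodes in each; once the
    # 256 GB nodes join, a second pool here would confirm
    # the two-pool scenario.
    isi storagepool nodepools list

    # Overall cluster view, including node health and
    # any running jobs.
    isi status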


Saker Klippsten

Dec 8, 2016, 12:51:22 PM
to isilon-u...@googlegroups.com
Interesting. I won't get into the "per vendor recommendation" part... But I know you can upgrade your existing nodes pretty easily from 64 GB to 256 GB. Why not do that first, before adding the new nodes, and thus eliminate the two-pool scenario?

It's a change in the config file via a download they give you. Shut down the node, add the memory, turn the node back on, and run the CTO config script again; this uploads a file to Isilon to update their records. Done.

Then add the new nodes and let AutoBalance do its thing.
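
If you want to watch AutoBalance (or MultiScan) work after the new nodes join, the job engine CLI is one place to look. A sketch assuming the OneFS 7.2-style `isi job` commands; the job ID below is a placeholder:

    # List jobs the job engine knows about, including any
    # AutoBalance/MultiScan runs triggered by the new nodes.
    isi job jobs list

    # Drill into a specific run by its numeric ID.
    isi job jobs view 12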

-s

Sent from my iPhone

Peter Serocka

Dec 9, 2016, 2:17:46 AM
to isilon-u...@googlegroups.com

On Dec 9, 2016, at 01:51, Saker Klippsten wrote:

> Interesting. I won't get into the "per vendor recommendation" part... But I know you can upgrade your existing nodes pretty easily from 64 GB to 256 GB. Why not do that first, before adding the new nodes, and thus eliminate the two-pool scenario?

Saker, wouldn't the cluster split into two pools as soon as it
finds nodes with different memory amounts?

@Kenneth
In either case, the cluster should end up with a single pool.
At that point, check whether the pool name is
referred to correctly in all settings (do you actually
have SmartPools licensed and file pool policies configured?)
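
One concrete way to audit that, sketched against the 7.2-style CLI (the policy name below is hypothetical; double-check the subcommands on your release):

    # List file pool policies and see which node pool each
    # one targets; a renamed or merged pool would leave a
    # policy pointing at a name that no longer exists.
    isi filepool policies list

    # Inspect one policy in detail.
    isi filepool policies view archive-to-x410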

Usually a cluster is smart enough to start the right
job at the right time after hardware changes. This includes
canceling a job and starting over multiple times
when a series of changes occurs. If a job survives
a hardware change, assume it is meant to be so; still, if
you are uncomfortable with this, cancel and restart it by hand.
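
Canceling and restarting by hand would look roughly like this, again assuming the 7.2-style job engine CLI (the ID 12 is a placeholder):

    # Find the running job's ID.
    isi job jobs list

    # Cancel it, then kick off a fresh MultiScan to
    # re-evaluate protection and balance cluster-wide.
    isi job jobs cancel 12
    isi job jobs start MultiScan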

Cheers

-- Peter

Saker Klippsten

Dec 9, 2016, 6:54:05 AM
to isilon-u...@googlegroups.com
Not in my similar experience. I upgraded a 10-node S200 cluster personally, over chat support, one node at a time, from 48 GB to 96 GB: install the unique serialized CTO tar package for that node (which sales emailed me), shut down the node, install the RAM, turn it back on, run the CTO report, which gets uploaded. Rinse and repeat until all nodes were complete. I waited 60 between each node, making sure it had joined back into the cluster. I think memory upgrades might be treated slightly differently than drive size differences.

Then we added four new S200 nodes, already configured with 96 GB of RAM, to the existing cluster. Maybe the protocol has changed with the newer platforms...
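
A small sketch of the "make sure it rejoined before moving on" step, run from another node's shell. The logical node number 9 and the grep pattern are illustrative; adjust to whatever `isi status` actually prints on your release:

    # Poll until the rejoining node (LNN 9 here) reports
    # healthy, then proceed to the next node.
    while ! isi status -n 9 | grep -q "OK"; do
        sleep 30
    done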

-s


Sent from my iPhone

Peter Serocka

Dec 9, 2016, 7:51:10 AM
to isilon-u...@googlegroups.com
Yeah, back in the days of 6.5 it was even possible to mix different
disk capacities in one pool… not to be sold as such,
but it once came to the rescue in a certain situation.

To wrap it up, I too think it’s definitely cleaner
to upgrade RAM first, then add new nodes.
I’d just double-check in advance how
7.2? or 8.0? will handle RAM differences (@Kenneth).

Cheers

— Peter

Dan Pritts

Dec 9, 2016, 10:08:41 AM
to isilon-u...@googlegroups.com
FWIW, when we added 10G cards to a couple of X200s recently, we had to have support come out and install the cards. And presumably do the CTO business. So watch out for that.

The party line was that we weren't allowed to crack the case; they had to do it.


--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan 
