Storage node decommission

Jan Behrend

May 16, 2017, 5:46:23 PM
to fhgfs...@googlegroups.com
Hello list,

what is the correct way to decommission a storage node?

"beegfs-ctl --migrate" will do the job, but "no modificatoin during the
process" is not really feasable for us in a production environment.

Is there a strategy that would let me manage a complete storage node life
cycle during normal BeeGFS usage?

Thanks in advance!
Jan

--
MAX-PLANCK-INSTITUT fuer Radioastronomie
Jan Behrend - Rechenzentrum
----------------------------------------
Auf dem Huegel 69, D-53121 Bonn
Tel: +49 (228) 525 359, Fax: +49 (228) 525 229
http://www.mpifr-bonn.mpg.de



bourd...@googlemail.com

May 30, 2017, 9:05:39 AM
to beegfs-user, jbeh...@mpifr-bonn.mpg.de
Hello Jan,

I can't speak for the BeeGFS team, but as a user I would suggest making a list of "preferred" storage targets that contains all storage targets from all nodes except the node that you want to empty and remove (see the "tunePreferredStorageFile" option in "beegfs-client.conf"). Then you simply repeat the target migration in case new files (or stripes/chunks) have been added in the meantime. Once the storage target is empty, just be fast enough to remove it... ;-)
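
A rough sketch of what I mean, with made-up IDs (targets 101-108, node 4 being the one to drain; check the real IDs with "beegfs-ctl --listtargets" first):

    # See which targets exist and which node each one belongs to:
    beegfs-ctl --listtargets --nodetype=storage

    # Suppose targets 101..106 should stay and 107,108 live on node 4
    # (the node to be drained). List only the "good" ones, one target ID
    # per line:
    printf '%s\n' 101 102 103 104 105 106 > /etc/beegfs/preferred-targets.txt

    # Then, in beegfs-client.conf on the clients:
    #   tunePreferredStorageFile = /etc/beegfs/preferred-targets.txt

    # Finally, repeat the migration until node 4 stays empty:
    beegfs-ctl --migrate --nodeid=4 /mnt/beegfs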

Otherwise, I would propose that the developers introduce a new target-ID list that marks targets as read-only, e.g. "tuneReadOnlyStorageFile".

Thanks and best greetings,
Ph. Bourdin.

Jan Behrend

Jun 1, 2017, 5:23:55 AM
to fhgfs...@googlegroups.com
On Tue, 2017-05-30 at 06:05 -0700, bourdin.kis via beegfs-user wrote:
> I can't speak for the BeeGFS team, but as a user I would suggest making a
> list of "preferred" storage targets that contains all storage targets from
> all nodes except the node that you want to empty and remove (see the
> "tunePreferredStorageFile" option in "beegfs-client.conf"). Then you simply
> repeat the target migration in case new files (or stripes/chunks) have been
> added in the meantime. Once the storage target is empty, just be fast
> enough to remove it... ;-)

I think this is too dangerous, since the "tunePreferredStorageFile" is only a
"suggestion" for BeeGFS not to use the target.  When it needs the target badly
it'll write to it anyway, and the manual clearly states that this could result in
data loss/corruption ...  Like Hermes said: "You don't want that."

> Otherwise, I would propose that the developers introduce a new target-ID list
> that marks targets as read-only, e.g. "tuneReadOnlyStorageFile".

@developers :)  Is this an option?

Any other feasible way to do this with the existing code?

Thanks in advance!
Cheers Jan

--
Anfragen bitte an it-su...@mpifr-bonn.mpg.de
Please send requests to it-su...@mpifr-bonn.mpg.de
----------------------------------------

Steffen Grunewald

Jun 1, 2017, 6:13:18 AM
to fhgfs...@googlegroups.com
On Thu, 2017-06-01 at 11:23:51 +0200, Jan Behrend wrote:
> On Tue, 2017-05-30 at 06:05 -0700, bourdin.kis via beegfs-user wrote:
> > I can't speak for the BeeGFS team, but as a user I would suggest making a
> > list of "preferred" storage targets that contains all storage targets from
> > all nodes except the node that you want to empty and remove (see the
> > "tunePreferredStorageFile" option in "beegfs-client.conf"). Then you simply
> > repeat the target migration in case new files (or stripes/chunks) have been
> > added in the meantime. Once the storage target is empty, just be fast
> > enough to remove it... ;-)
>
> I think this is too dangerous, since the "tunePreferredStorageFile" is only a
> "suggestion" for BeeGFS not to use the target.  When it needs the target badly
> it'll write to it anyway, and the manual clearly states that this could result in
> data loss/corruption ...  Like Hermes said: "You don't want that."

Have you considered "echo 0 > free_space.override" on the discouraged target(s)?
This will set the target to read-only for new object creation, but will still
allow deletion. I think this is close to what you need?
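
For example, something along these lines on the storage server being drained (the paths are examples; use whatever "storeStorageDirectory" in beegfs-storage.conf actually points to):

    # Mark each target on this server as having zero free space, so the
    # metadata server stops placing new chunks here:
    for tgt in /data/beegfs/storage01 /data/beegfs/storage02; do
        echo 0 > "$tgt/free_space.override"
    done

    # To undo, simply remove the override files again:
    # rm /data/beegfs/storage01/free_space.override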

> > Otherwise, I would propose that the developers introduce a new target-ID list
> > that marks targets as read-only, e.g. "tuneReadOnlyStorageFile".
>
> @developers :)  Is this an option?
>
> Any other feasible way to do this with the existing code?

- S

--
Steffen Grunewald, Cluster Administrator
Max Planck Institute for Gravitational Physics (Albert Einstein Institute)
Am Mühlenberg 1
D-14476 Potsdam-Golm
Germany
~~~
Fon: +49-331-567 7274
Fax: +49-331-567 7298
Mail: steffen.grunewald(at)aei.mpg.de
~~~

bourd...@googlemail.com

Jun 1, 2017, 9:49:42 AM
to beegfs-user
Hello,

doing a "echo 0 > free_space.override" on each storage target directory would not allow to *savely* remove a node, because existing files still may receive new chunks. What we need is a "save target/node removal". Because this requires the metadata server to re-distribute a striping target to another storage server, something like this needs to be implemented:

beegfs-ctl --decommission --nodeid ## [--nomirrors] [--verbose] /mnt/beegfs
beegfs-ctl --decommission --targetid ## [--nomirrors] [--verbose] /mnt/beegfs

This would include:
1) setting a node/target to read-only, e.g. with "beegfs-ctl --readonly --nodeid ##"
2) redistributing affected striping targets for existing objects on that node/target
3) running "beegfs-ctl --migrate"
4) checking that the migration was successful and the node/target is really "empty"
5) removing the node/target with "beegfs-ctl"
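
Until something like this exists, the closest approximation I can see with commands that are already there would be roughly the following (a sketch only, with a made-up target ID; it still lacks a real read-only step, so new chunks for existing files can slip through):

    # 1) no true read-only mode today; free_space.override at least
    #    stops placement of new chunks on the target:
    echo 0 > /data/beegfs/storage01/free_space.override

    # 2+3) migrate existing files away from the target (ID 12 here):
    beegfs-ctl --migrate --targetid=12 /mnt/beegfs

    # 4) check whether any files are still left on the target:
    beegfs-ctl --find --targetid=12 /mnt/beegfs

    # 5) only then unregister the (now empty) target:
    beegfs-ctl --removetarget 12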

Once a redistribution of specific striping targets is implemented on the metadata server, a solution for the problem described in my thread from 25 May ("need more flexible striping") is within reach.

Thank you and best greetings,
Philippe.

Cao Deng

Dec 12, 2023, 9:46:21 AM
to beegfs-user
Hello,

By doing 'echo 0 > free_space.override' on the storage targets that are to be migrated, we can stop new files from being created on these targets, while they can still be created on the remaining targets.
So I think this could keep BeeGFS online during the migration.

Has anybody tried this?
I want to do that in my production environment.
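
Before doing it for real, this is the sanity check I would run first (a sketch; the exact flags and output may differ between BeeGFS versions):

    # The overridden targets should now report (almost) no free space:
    beegfs-ctl --listtargets --spaceinfo --nodetype=storage

    # And a freshly created file should stripe only across the
    # remaining targets:
    touch /mnt/beegfs/placement-test
    beegfs-ctl --getentryinfo /mnt/beegfs/placement-test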

Thanks a lot!