[Lustre-discuss] One Lustre Client lost One Lustre Disk

Ms. Megan Larko

Jul 16, 2009, 10:40:40 AM
to Lustre User Discussion Mailing List
Good Day!

Yesterday evening around 5:30 p.m. local time, one of my Lustre client
systems lost one of its two Lustre disks. I was not able to remount
it, even after a reboot of the client. The mount command returns the
following message:
[root@crew01 ~]# mount /crew2
mount.lustre: mount ic-mds1@o2ib:/crew2 at /crew2 failed: Invalid argument
This may have multiple causes.
Is 'crew2' the correct filesystem name?
Are the mount options correct?
Check the syslog for more info.
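
(For reference, LNet connectivity from this client to the MGS can be
checked with the commands below; just a sketch, using the NID from the
fstab entry quoted further down.)

lctl list_nids            (list this client's own NIDs)
lctl ping ic-mds1@o2ib    (check that the MGS NID answers over o2ib)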


And the /var/log/messages file (a CentOS 5.1 system running kernel
2.6.18-53.1.13.el5) shows:
Jul 16 10:30:53 crew01 kernel: LustreError: 156-2: The client profile
'crew2-client' could not be read from the MGS. Does that filesystem
exist?
Jul 16 10:30:53 crew01 kernel: Lustre: client ffff810188fcfc00 umount complete
Jul 16 10:30:53 crew01 kernel: LustreError:
26240:0:(obd_mount.c:1924:lustre_fill_super()) Unable to mount (-22)
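
(Note: -22 is EINVAL, the same "Invalid argument" that mount reported;
per the messages above it comes from the client being unable to read
the 'crew2-client' configuration log from the MGS.)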

The entry in the client /etc/fstab file is unchanged from before:
ic-mds1@o2ib:/crew2 /crew2 lustre nouser_xattr,_netdev 0 0
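
(To rule out fstab parsing, the equivalent manual mount would be the
sketch below; _netdev is omitted because it is only meaningful to the
boot scripts, not to mount itself.)

mount -t lustre -o nouser_xattr ic-mds1@o2ib:/crew2 /crew2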

This same client uses the /etc/fstab entry
"ic-mds1@o2ib:/crew8 /crewdat lustre nouser_xattr,_netdev 0 0"
This Lustre disk is still mounted and usable:
ic-mds1@o2ib:/crew8 76T 30T 42T 42% /crewdat

What is also interesting is that other clients still have access to
the /crew2 disk, even though this one client does not.
There are no crew2 errors on the MGS/MDS system, which serves both of
the Lustre disks.
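
(For comparison, a sketch of a check on a client where /crew2 still
works: "lctl dl" lists the local Lustre devices, so a healthy client
should show the crew2 LOV/MDC/OSC devices.)

lctl dl | grep -i crew2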

What has this one particular client lost that prevents it from
mounting the /crew2 disk to which the other clients still have access?
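
(One way to get more detail than the one-line EINVAL, sketched below:
reproduce the failure and then dump the Lustre kernel debug buffer to
a file with "lctl dk".)

mount /crew2                   (reproduce the failing mount)
lctl dk /tmp/crew2-mount.dbg   (dump the debug buffer for inspection)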

Any and all suggestions are appreciated.
megan
_______________________________________________
Lustre-discuss mailing list
Lustre-...@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss

Andreas Dilger

Jul 17, 2009, 1:46:06 AM
to Ms. Megan Larko, Lustre User Discussion Mailing List

You should also check for messages on the MDS. You can check that the
config file exists via "debugfs -c -R 'ls -l CONFIGS' /dev/{mdsdev}".
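
For example, with /dev/sdb standing in for the real MDS device (the -c
opens the device read-only):

debugfs -c -R 'ls -l CONFIGS' /dev/sdb

Since that MGS serves both filesystems, CONFIGS should contain a client
log for each of them, i.e. both crew2-client and crew8-client entries.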

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
