I have a SUSE SLES 10 SP1 x86_64 system.
I had a power loss and got some root file system corruption (based on
running fsck in display-only / read-only mode).
I set the system to do an fsck at reboot by doing BOTH of the following (and I
suspect doing both is the root of my problem) - no flames please :-)
1) in the root directory I did a
touch /forcefsck
and also
2) shutdown -rF now
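For reference, on a sysvinit system like SLES 10 the two commands should be
equivalent - the standard flag files work like this:

  touch /forcefsck    # boot scripts run fsck when this flag file exists, then remove it
  shutdown -rF now    # -F creates /forcefsck for you before rebooting
  touch /fastboot     # the opposite flag: skip the check (shutdown -rf sets it)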
What is happening now: the system performs an fsck, reboots, performs an fsck,
reboots, performs an fsck, reboots ... I am stuck in this loop and cannot
figure out how to break out.
This happens in FAILSAFE mode as well as normal mode.
When I try Ctrl-C during the fsck, it just throws up an "error in
service module" message.
I have tried escaping out of the GRUB GUI and modifying the boot options to
include fastboot - but that did not work.
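For what it's worth, the edit looked roughly like this (the kernel path and
root device here are examples, not my exact menu.lst entry):

  # press Esc to leave the GRUB GUI, highlight the entry, press 'e' on the
  # kernel line, append the option, then press 'b' to boot:
  kernel /boot/vmlinuz root=/dev/sda2 fastboot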
Any ideas / suggestions?
Thanks
I am working on that - I am having trouble getting the RAID drivers onto a
boot CD because of the way the system was installed ...
I am trying SystemRescueCD ver 1.1.6 (http://www.sysresccd.org). I have also
tried the install CD. The problem is that neither CD seems to
recognize the disks.
and when I try: mount -t ext3 /dev/sda2 /tmpmnt ... I get
mount: wrong fs type, bad option, bad superblock on /dev/sda2, missing
codepage or helper program, or other error. In some cases useful info is
found in syslog - try dmesg | tail or so.
dmesg shows /dev/sda2: read fail after 0 of 512 at 1447048708096:
Input/output error
it shows the same type of error for sda1 (/boot), sda2 (/), and sda3 (swap)
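For anyone hitting the same thing, a few read-only checks from the rescue
shell that separate a bad filesystem from a bad device (assuming ext3, as
above):

  dmesg | tail                                  # kernel-level reason for the failure
  dd if=/dev/sda2 of=/dev/null bs=512 count=8   # raw read below the filesystem layer
  e2fsck -n /dev/sda2                           # check without writing; -b 32768 tries a backup superblock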
> and when I try: mount -t ext3 /dev/sda2 /tmpmnt ... I get
Isn't ReiserFS the default filesystem for SUSE?
yes - but I changed it - no particular reason ...
> and when I try: mount -t ext3 /dev/sda2 /tmpmnt ... I get
>
> mount: wrong fs type, bad option, bad superblock on /dev/sda2, missing
> codepage or helper program, or other error. In some cases useful info
> is found in syslog - try dmesg | tail or so.
Does /tmpmnt exist as a mount point? Try not specifying the -t
option, as mount should detect the fs type (or be sure it really is ext3).
What does fdisk -l /dev/sda show? Is /dev/sda2 set read-only
in /etc/fstab? Is it a valid partition? And what about the RAID setup
you said is giving you problems because of how it was set up - what do
you mean by that? Also, what does lvscan --version show?
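Concretely, from the rescue shell (using /tmpmnt and the device names from
your post), something like:

  mkdir -p /tmpmnt          # make sure the mount point exists
  fdisk -l /dev/sda         # is sda2 there and typed as Linux (83)?
  mount /dev/sda2 /tmpmnt   # no -t, let mount auto-detect the fs type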
When using a rescue disk I can mount to any dir - I only used /tmpmnt as
an example.
I tried with -t and without -t ... I still get the same error.
So here is the full story - I have a server that I use as a sandbox. I
was running SUSE 10 SP1 64-bit, running SAP / Oracle on it.
This is an Intel server (S5000VSA motherboard, chassis, CPU, etc. that I hand
built) and I used the Intel Deployment CD ... which creates and executes a
custom install for you with all the required drivers. Idiot proof.
I was lazy and set up a huge root partition (1.5 TB) ... using five 250 GB
disks in RAID 0. Yesterday I had a power outage, and after the system came
up the / file system had errors. So I did what I did (as outlined in the
original post) to scan the / filesystem. Now the server is stuck in the
infinite loop I outlined.
I tried booting from the SUSE CD, trying to specify the Intel MegaSR RAID
drivers so the system can see the partitions, so I can mount / and delete the
/forcefsck file or issue a shutdown -rf now, but for whatever reason I cannot
get the SUSE boot CD / rescue mode to load the drivers. I tried a USB stick
and I burned them to CD. I specify the direct path but no success. I also
tried SystemRescueCD but cannot seem to load the Intel MegaSR RAID drivers
there either.
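Roughly what I was attempting (mount point and module path are just
examples; the megasr.ko has to match the rescue kernel version exactly):

  mkdir -p /mnt/usb && mount /dev/sdb1 /mnt/usb   # the stick with the driver on it
  insmod /mnt/usb/megasr.ko                       # Intel / LSI MegaSR driver
  cat /proc/partitions                            # did sda1-sda3 show up?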
I tried working on the system this evening - now when I boot in SUSE
failsafe mode using the CD, it shows I/O errors on my / partition.
I gave up and am now in the process of installing RedHat ES 4.0 Update 5
x86_64. This is a sandbox, and it serves me right for not having a backup
and setting everything up as RAID 0.
I am letting a co-worker use this system as a learning system ... so no real
work is lost (she will just have to re-install SAP and Oracle).
In case the disk itself isn't bad, maybe use the rescue CD again, enable
networking, download the kernel module for the drivers you need, and
load it (insmod/modprobe) to see if you can get the RAID partitions back up?
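Something along these lines - the URL is a placeholder, and the module would
need to be built against the rescue CD's kernel:

  dhcpcd eth0                        # or however the rescue CD brings up DHCP
  wget http://<somewhere>/megasr.ko  # placeholder location for a matching module
  insmod ./megasr.ko
  dmesg | tail                       # did the controller get detected?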
It's no wonder you can't mount sda3 -- swap partitions have no
filesystem. Try "dd if=/dev/sda1 of=/dev/null bs=1k count=1" just to make
sure you can read from the disk. If you can't, it sounds like the disk
hardware, the motherboard, or maybe something simple like a data cable
that's partway unplugged - not just a scrambled filesystem. If
you can, and you expect to be able to mount it but can't (Is it mounted
already? Are you specifying the correct filesystem?), it's rather odd
that the same thing would happen to all the filesystems. FWIU reiserfs
deals really badly with damage; this is one reason I avoid it.
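Expanding that one-block test to all three partitions (read-only, so safe):

  for p in sda1 sda2 sda3; do
    dd if=/dev/$p of=/dev/null bs=1k count=1 && echo "$p: readable"
  done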
--
-eben QebWe...@vTerYizUonI.nOetP http://royalty.mine.nu:81
CAPRICORN: The stars say you're an exciting and wonderful person... but
you know they're lying. If I were you, I'd lock my doors and windows
and never never never never never leave my house again. -- Weird Al