I've been using affuse to "reassemble" split raw images and have come
across a strange issue (possible bug?). Note that I am doing this
testing with afflib 3.6.2 on Ubuntu 9.10 x86_64, kernel v2.6.31-22,
fuse v2.7.4.
My test case is a small (30GB) dd image from a WinXP host that I've
manually split into 2GB chunks:
elk# ls -l WinXP-VM.raw WinXP-VM-split/
-rwxrwxrwx 1 hal hal 32212254720 2009-09-27 13:17 WinXP-VM.raw
WinXP-VM-split/:
total 31457340
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:13 x.000
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:15 x.001
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:17 x.002
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:18 x.003
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:19 x.004
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:21 x.005
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:22 x.006
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:24 x.007
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:25 x.008
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:27 x.009
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:28 x.010
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:29 x.011
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:31 x.012
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:32 x.013
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 10:34 x.014
Now I use affuse to create a combined image:
elk# cd WinXP-VM-split/
elk# affuse x.000 /mnt/combine/
elk# ls -l /mnt/combine/
total 0
-r--r--r-- 1 root root 32195477504 1969-12-31 16:00 x.000.raw
You will note that the combined image created by affuse shows up as
16777216 bytes smaller than the original (unsplit) image file.
And when we go to mount the image, we have a problem because the image
appears to be truncated:
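For what it's worth, the deficit works out to exactly one 16MB page (AFFLIB's default page size). A quick sanity check, using the sizes from the ls listings above:

```python
# Sizes taken from the ls output above.
original = 32212254720        # WinXP-VM.raw
combined = 32195477504        # /mnt/combine/x.000.raw as reported by affuse
page     = 16 * 1024 * 1024   # 16777216 -- AFFLIB's default page size

print(original - combined)    # 16777216, exactly one page
print(original % page)        # 0 -- the image is an exact multiple of the page size
print(original // page)       # 1920 pages in the full image
print(combined // page)       # 1919 pages in the affuse-combined image
```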
elk# mmls -t dos /mnt/combine/x.000.raw
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors
Slot Start End Length Description
00: Meta 0000000000 0000000000 0000000001 Primary Table (#0)
01: ----- 0000000000 0000000062 0000000063 Unallocated
02: 00:00 0000000063 0062894474 0062894412 NTFS (0x07)
elk# mount -o ro,loop,show_sys_files,offset=32256 /mnt/combine/x.000.raw /mnt/test
Failed to read last sector (62894411): Invalid argument
HINTS: Either the volume is a RAID/LDM but it wasn't setup yet,
or it was not setup correctly (e.g. by not using mdadm --build ...),
or a wrong device is tried to be mounted,
or the partition table is corrupt (partition is smaller than NTFS),
or the NTFS boot sector is corrupt (NTFS size is not valid).
Failed to mount '/dev/loop0': Invalid argument
The device '/dev/loop0' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
I have tested this with another image and see the same behavior--
the combined image file created by affuse is consistently 16777216
bytes short:
elk# ls -l freebsd.img split/
-rw-r--r-- 1 hal hal 17179869184 2010-01-25 04:22 freebsd.img
split/:
total 16777248
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 13:00 freebsd.000
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 13:02 freebsd.001
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 13:03 freebsd.002
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 13:04 freebsd.003
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 13:06 freebsd.004
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 13:07 freebsd.005
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 13:09 freebsd.006
-rw-r--r-- 1 hal hal 2147483648 2010-10-04 13:10 freebsd.007
elk# cd split/
elk# affuse freebsd.000 /mnt/combine/
elk# ls -l /mnt/combine/freebsd.000.raw
-r--r--r-- 1 root root 17163091968 1969-12-31 16:00 /mnt/combine/freebsd.000.raw
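Same arithmetic on this image: the deficit is again exactly one 16MB page.

```python
original = 17179869184   # freebsd.img (exactly 16GB)
combined = 17163091968   # /mnt/combine/freebsd.000.raw
page     = 16777216

print(original - combined)              # 16777216 again
print(original // page, combined // page)  # 1024 pages vs 1023 pages
```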
Thoughts? Is this a bug or am I doing something stupid? My Google-fu
is coming up empty.
Thanks in advance for your time and attention.
Hal Pomeranz
Simson
This is actually a pure "split raw" image and not in AFD format, but
let me show you some interesting results that I've obtained:
elk# mmls -i split -t dos x.*
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors
Slot Start End Length Description
00: Meta 0000000000 0000000000 0000000001 Primary Table (#0)
01: ----- 0000000000 0000000062 0000000063 Unallocated
02: 00:00 0000000063 0062894474 0062894412 NTFS (0x07)
03: ----- 0062894475 0062914559 0000020085 Unallocated
elk# mmls -i afd -t dos x.*
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors
Slot Start End Length Description
00: Meta 0000000000 0000000000 0000000001 Primary Table (#0)
01: ----- 0000000000 0000000062 0000000063 Unallocated
02: 00:00 0000000063 0062894474 0062894412 NTFS (0x07)
Notice that with "-i afd" we don't see the final (unallocated) slice.
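If I'm reading the numbers right, this also explains the mount failure earlier: the image seen via "-i afd" is one 16MB page (32768 sectors) short, which puts its last sector before the end of the NTFS partition. A quick check using the sizes and sector numbers above:

```python
SECTOR = 512
full_size  = 32212254720                     # original image size
short_size = full_size - 16 * 1024 * 1024    # what "-i afd" sees

last_sector_full  = full_size // SECTOR - 1   # 62914559, matches slot 03's end above
last_sector_short = short_size // SECTOR - 1  # 62881791
ntfs_end = 62894474                           # NTFS end sector from the mmls output

print(last_sector_full)                 # 62914559
print(last_sector_short)                # 62881791
print(ntfs_end > last_sector_short)     # True: the NTFS partition extends
                                        # past the end of the truncated image
```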
Similarly I can use "blkcat -i split ..." to dump the last cluster of
the NTFS file system, but when I try it with "-i afd" I get an error:
elk# blkcat -h -i afd -o 63 x.* 7861800
Error reading image file (tsk_fs_read_block: Address missing in partial image: 7861800)) (blkcat: Error reading block at 7861800)
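Assuming the usual 4KB NTFS cluster size (I haven't confirmed it with fsstat, so treat this as back-of-the-envelope), block 7861800 does land squarely inside the missing final page:

```python
CLUSTER = 4096                 # assumed NTFS cluster size -- not verified
offset_bytes = 63 * 512        # partition offset (-o 63)
block_start = offset_bytes + 7861800 * CLUSTER

short_size = 32195477504       # what "-i afd" can see
full_size  = 32212254720       # actual image size

print(block_start)                            # 32201965056
print(short_size <= block_start < full_size)  # True: inside the missing 16MB
```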
So, I thought that I should try converting the split raw files into
an actual AFD collection:
elk# affconvert -O.. -a .afd x.0*
convert x.000 --> ../x..afd
Writing to page 1918 with 16777216 bytes read from input...
md5: 0667c05275dda4fe7c528024a3957f8e
sha1: ce3ed9ba2cde196e69644b1b855c63133b208c85
bytes converted: 32195477504
Total pages: 1919 (1860 compressed)
Conversion finished.
This is a 30GB image (32212254720 bytes), but affconvert only converted
32195477504 bytes -- again, the last 16777216-byte page seems to have
been skipped. And indeed we see the same truncated mmls output with the
newly converted AFD files:
elk# cd ../x..afd/
elk# mmls -i afd -t dos file_0*
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors
Slot Start End Length Description
00: Meta 0000000000 0000000000 0000000001 Primary Table (#0)
01: ----- 0000000000 0000000062 0000000063 Unallocated
02: 00:00 0000000063 0062894474 0062894412 NTFS (0x07)
So then I thought I'd try converting the original unsplit image
with affconvert:
elk# affconvert WinXP-VM.raw
convert WinXP-VM.raw --> WinXP-VM.aff
Converting page 0 of 1919^C # I hit Ctrl-C here to abort
A 32212254720-byte image divides into exactly 1920 16MB pages, so there
really should be 1920 pages being converted, not 1919. Granted, you're
counting from page 0, so "of 1919" might just be the index of the last
page. But since the 16777216-byte deficit is consistent across a number
of different tools that use AFFLIB, I suspect there's a "fencepost"
type error someplace in the AFFLIB code that's causing the last page to
be skipped. I'm not familiar with the code (yet), so I can't say
exactly where this might be happening.
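To make the fencepost hypothesis concrete, here's a hypothetical sketch (not the actual AFFLIB code, which I haven't read) of the kind of loop bound that would produce exactly this symptom:

```python
PAGE = 16 * 1024 * 1024
size = 32212254720            # divides into exactly 1920 pages

# Correct bound: visit every page offset from 0 up to (but not including) size.
ok = len(range(0, size, PAGE))
print(ok)                     # 1920

# Buggy bound: something like "offset < size - PAGE" (or "page < npages - 1")
# silently drops the final page.
bad = len(range(0, size - PAGE, PAGE))
print(bad)                    # 1919 -- one 16777216-byte page short
```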
Am I off-base here?
--Hal