Re: Index Of Dmg Ntfs For Mac


Alfonzo Liebenstein

Jul 14, 2024, 12:30:50 PM
to wienestican

I am using Synapse + Unity Dash for searching and it seems that neither of these can index folders I've accessed in my NTFS partition. I believe this is because Zeitgeist does not index folders, and locate (which Synapse uses, I believe) does not touch my NTFS drive.

How does the computer retrieve a particular entry in the MFT for a file or directory? I have read through many documents that describe the structure of NTFS and the MFT, but I still don't understand: say I have a file at E:\documents\test.txt, how can I identify its entry in the MFT index? Is it sequential?
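For what it's worth, the MFT itself is just an array of fixed-size records, so a record is located by its record number, not by its path; the path E:\documents\test.txt is resolved by walking the directory indexes ($I30) from the root downward, and each index entry carries the MFT reference of the child. Here is a minimal Python sketch of the record-number arithmetic, assuming the common 1024-byte record size (the real value comes from the volume boot sector):

    # Sketch only: locating an MFT record by its record number.
    # Assumes the typical 1024-byte FILE record size; read the actual
    # size from the boot sector ($Boot) on a real volume.
    MFT_RECORD_SIZE = 1024

    def mft_record_offset(record_number: int) -> int:
        # Records are sequential within the $MFT file, so the byte offset
        # is simply record_number * record_size.
        return record_number * MFT_RECORD_SIZE

    # Path resolution is a separate, tree-walking step: start at the root
    # directory (record 5), look up "documents" in its $I30 index to get
    # that folder's MFT reference, then look up "test.txt" the same way.
    print(mft_record_offset(5))  # 5120: offset of the root directory record

So the short answer is: sequential by record number, but not sequential by path.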

I have a folder in NTFS that contains tens of thousands of files. I've deleted all the files in that folder save one. I ran contig.exe to defragment that folder, so it is now in one fragment only. However, the size of that folder is still 8 MB. This implies that there's a lot of gap in the index. Why is that? If I delete that one remaining file, the size of the index automatically goes to zero; my guess is that it gets collapsed back into the MFT. Is there any way to get NTFS to truly defragment the index file by rewriting it based on its contents? Any API that you're aware of? Contig.exe only defragments the physical file.

There is slack in the index, but not a gap. I make the distinction to imply that there is technically wasted space, but it's not like NTFS has to parse the 8MB in order to enumerate/query/whatever the index. It knows where the root of its tree is, and it just happens to have a lot of extra allocation leftover. Probably too detailed a response, given how unhelpful it is.

The author provided some otherwise undocumented information about folder index fragmentation, which he received from Microsoft Tech Support during an incident. The short version is that DEFRAG does not defragment the folder index, only the files in that folder. If you want to defragment the folder index, you have to use Sysinternals' CONTIG tool, which is now owned and distributed (free of charge) by Microsoft. The answer includes a link to CONTIG.
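For reference, the way I understand it (the flags are from memory, so verify against Contig's built-in help before relying on this), you point Contig at the directory itself rather than at the files inside it, something like:

    contig.exe -v "D:\path\to\BigFolder"

where -v just prints verbose output; passing the directory as the target is what makes Contig operate on the folder's index rather than on an ordinary file.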

I have a relatively new hard disk and it has been working fine all this while. Today, however, on starting Windows a few of my applications failed to load some DLLs. Windows ran chkdsk upon restart and reported a dozen index issues, which it fixed successfully.

My external HDD became problematic. I wanted to get rid of some unused data and found that two folders were not accessible under Windows because they were corrupted and unreadable. Those folders opened without any problem under OS X and Linux, and I could also read all their content via GetDataBack for NTFS. chkdsk always got stuck at the same point, step 2, 'Correcting errors in index $I30 for the file n'. The strange thing is that once the program reaches that point, the HDD stops its activity: its LED stops blinking at first, then turns off completely. I left it running overnight but already knew what the result would be.
HDD Regenerator had no effect on the drive. I also ran ntfsfix under Linux, but it obviously did not help. So I copied what I needed onto my main drive and deleted the two folders under Linux.
Now I am here because the HDD seems fine, HD Tune returns a completely green grid under 'Error Scan' and a pile of 'ok' under 'Health', but chkdsk keeps getting stuck at the same exact point, and in read-only mode (no parameters) a few of the files that I deleted under Linux are still mentioned, with messages along the lines of
'The entry total in index $I30 of file n is not correct.',
plus some
'Error in index $I30 of file n.'.
What I want to do is defrag the drive, but I'm afraid it could get much worse :/
Could this be a hardware problem? Should I worry about my data? Right now I don't have another place to back up my files, so I can't format the drive.
Thank you for the help :D

No one has any suggestions? I keep searching but can't find a similar situation! Should I just go ahead with the defrag (the drive has a fairly high fragmentation rate) and forget about the error, or is it a serious issue for my data?
I'm afraid that by using the drive some data could be overwritten because of the bad indexes.
Would reformatting the drive help?
Thank you :D

Looking at the characteristics of ReFS, it seems to be a perfect match for use as a backup target. It also supports sparse files and the recommended block sizes, so it should be able to accommodate the DDB, index and data.

Many popular file systems, such as FAT and traditional Unix file systems, store directory information as a simple flat file. Recognizing the efficiency issues with lookups in large flat files, NTFS employs B-tree indexing for several of its building blocks, providing efficient storage of large data sets and very fast lookups. As forensic examiners, we can take advantage of the NTFS B-tree implementation as another source to identify files that once existed in a given directory.
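To make the efficiency point concrete, here is a small illustrative comparison in Python (not NTFS code, just the underlying idea): a flat directory file forces a linear scan over every name, while a sorted, B-tree-style index lets each lookup discard most of the remaining candidates at every step, which is why directories with tens of thousands of entries stay fast.

    import bisect

    # Illustrative only: a "flat file" directory vs. a sorted index.
    names = ["report.doc", "a.txt", "notes.md", "photo.jpg"]  # arrival order
    index = sorted(names)                                     # NTFS keeps index
                                                              # entries sorted

    def flat_lookup(target):
        # O(n): potentially every entry must be examined.
        return target in names

    def indexed_lookup(target):
        # O(log n): binary search, the same idea a B-tree applies on disk
        # across nodes instead of within a single Python list.
        i = bisect.bisect_left(index, target)
        return i < len(index) and index[i] == target

    print(flat_lookup("notes.md"), indexed_lookup("notes.md"))  # True True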

Similar to Master File Table (MFT) entries in NTFS, index entries within the B-tree are not completely removed when file deletion occurs. Instead, they are marked as deleted using a corresponding $BITMAP attribute. Additionally, the size of index nodes can vary, particularly for large filenames, providing a type of slack that can hold previously existing filenames. Since B-tree nodes are regularly shuffled to keep the tree balanced, file name remnants are scattered and it is a common occurrence to find duplicate nodes referencing the same file. Of course, the flip side of re-balancing a B-tree is that it often results in data within unallocated nodes being overwritten. Thus while we commonly find evidence of long lost files within $I30 attributes, there is no guarantee they will be present.

Interestingly, NTFS directory index entries use a $FILE_NAME attribute type to store file information within the index. You may recall that this is the same attribute employed by the MFT, and hence it provides a treasure trove of information about the file: the parent directory's MFT reference, all four MACB timestamps, the allocated and real file sizes, file flags, and the file name itself.
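As an unofficial illustration of that layout, the sketch below decodes a single directory index entry from a raw $I30 buffer using the field offsets found in public NTFS documentation; the helper names are mine and not part of any tool discussed here.

    import struct
    from datetime import datetime, timedelta, timezone

    EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

    def filetime_to_dt(ft):
        # NTFS timestamps are 100-nanosecond intervals since 1601-01-01 UTC.
        return EPOCH_1601 + timedelta(microseconds=ft / 10)

    def parse_index_entry(buf, off=0):
        """Decode one $I30 directory index entry starting at `off` (sketch only)."""
        # Entry header: MFT reference of the file, entry length,
        # length of the embedded $FILE_NAME content, and flags.
        mft_ref, entry_len, fn_len, flags = struct.unpack_from("<QHHI", buf, off)
        fn = off + 0x10  # the embedded $FILE_NAME attribute body starts here
        (parent_ref, created, modified, mft_changed, accessed,
         alloc_size, real_size, file_flags, reparse,
         name_chars, namespace) = struct.unpack_from("<QQQQQQQIIBB", buf, fn)
        name = buf[fn + 0x42: fn + 0x42 + name_chars * 2].decode("utf-16-le")
        return {
            "name": name,
            "mft_record": mft_ref & 0xFFFFFFFFFFFF,  # low 48 bits = record number
            "created": filetime_to_dt(created),
            "modified": filetime_to_dt(modified),
            "mft_changed": filetime_to_dt(mft_changed),
            "accessed": filetime_to_dt(accessed),
            "allocated_size": alloc_size,
            "real_size": real_size,
            "next_entry_offset": off + entry_len,  # walk entries by adding entry_len
        }

Walking a dumped $I30 buffer with this kind of routine (and continuing past the "last entry" flag into remnant space) is essentially what the slack-recovery tools described below automate.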

A few examples can better illustrate how useful these entries can be. I recently had a case where it appeared a large number of files were moved to the Recycle Bin, which was subsequently emptied and most of the corresponding INFO2 file was reallocated. The $I30 file still contained information on many of those files (albeit renamed according to the Recycle Bin schema). By analyzing the MFT Change Times of the $I30 index entries, I was able to determine when the user placed each file within the Recycle Bin, and collect a list of what types of files were "recycled" using their file extensions.

Evidence may still be found in Index Attributes even if wiping or anti-forensics software has been employed. Figure 1 shows the parsed output for a $I30 file from the Windows directory. Two deleted index entries have been highlighted. In this example, a file named fgdump.exe was overwritten using a software tool named BCWipe. The original filename was overwritten with random characters (sqhyoeop.roy) and the Modified, Accessed, and Created time stamps were set to fictitious values. Since MFT Change Times cannot be directly modified via the Windows API, that timestamp still accurately reflects when the wipe occurred. Of course the interesting part of this example is that evidence of both the original file and the wiping artifacts are contained in the slack of the $I30 file.

One of the primary reasons many examiners don't utilize index attribute files is that getting access to them is not always intuitive. I congratulate AccessData and their Forensic Toolkit (FTK) for clearly identifying $I30 indexes for as long as I can remember. Figure 2 shows what they look like in FTK. Simply right-click on the $I30 file to export it from the image.

To identify index attributes in EnCase, an EnScript is required. One ships in the stock Examples folder and is named "Index buffer reader". This script can be pointed at a specific directory, a collection of tagged directories, or the entire file system. The results are nicely bookmarked, and the entries are parsed within each bookmark's comments field. To export the $I30 file in EnCase, first select the "Index Buffer" you are interested in within the Tree Pane, select all within the View Pane, then right-click and select Export (Figure 5).

The format of $I30 entries is well known and extensively documented. However, indexes commonly reach sizes in the hundreds of kilobytes and hold thousands of entries (theoretically they could hold billions), and it is tiresome work to do the parsing by hand. Of the previously covered forensic suites, only EnCase has a native ability to parse the files, though the output is very difficult to use and analyze. Luckily, Willi Ballenthin recently released an open source tool that does an excellent job of parsing $I30 files [2]. It formats output as CSV, XML, or bodyfile (for inclusion in a timeline) and has a feature to search remnant space for slack entries. The tool is written in Python and a sample command line follows:
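A sample invocation (my best guess rather than the author's exact command; the -d option to search slack space for deleted entries is from memory, and the output filename is arbitrary, so check the script's help output first) would look like:

    python INDXParse.py -d \path\to\extracted\$I30 > i30_entries.csv

CSV is the default output format, per the paragraph above, so no extra flag should be needed for that.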

The resulting file can be opened and filtered in Excel (CSV output is the default). Notice the file names, file size, and four timestamps displayed in the output shown in Figure 6. Several deleted index node entries (slack) are also displayed within the output.
