Defrag Error 264

Shanta Plansinis

Aug 3, 2024, 1:16:11 PM
to mierustelal

On a Windows 7 machine with 4 GB of RAM, LabVIEW memory usage jumps in big steps during defrag to about 375 MB, and the machine spends a couple of minutes before finishing a 6 GB file. However, on this machine I have not been able to reproduce the crash.

Was careful not to defrag an invalid or already open TDMS file. Have read all the posts on that issue. Finally removed the defrag from the write program, ran it separately, and the crash still occurred. Am able to read the big files without problems with the TDMS reader.

Set a large buffer size for each of the channels (1M). This dropped the size of the .tdms_index file to 1 kB or so vs. some 50 MB before. But the crash still occurred when trying to defrag. (Again, only on the 2 GB XP machine.)

My best guess is that the defrag VI can eat memory uncontrollably, resulting in a crash. My next best guess is that I am doing something stupid. I usually assume that is the best guess, but after half a day on the issue, I have moved it to #2.

We have also found that problem with TDMS Defragment, and the fix will ship with LabVIEW 2010. The problem is what you guessed: Defragment tries to allocate some memory, but on some machines the allocation fails and causes the crash.


In my mind, there are several workarounds: for example, use Defragment in LabVIEW 8.6 if the file is in the TDMS 1.0 format, or just read each channel in the file individually and then write them all to a new file.

Secondly, of course we only need to de-frag if read performance is too slow. We already know this is the case if we use smaller blocks. We will have to do some more read performance testing with large write buffers which help make very small index files.

It is sort of an unfortunate situation: TDMS 2.0 should be much faster, but because we cannot reliably de-frag, the read application may end up being slower. It may well be that we have to roll back to TDMS 1.0 and the 8.6 defrag and see if we get acceptable performance. The question then is how that will affect write performance.

From what you write, is it correct that the de-frag fix will not be in the February maintenance release (I assume that is already in manufacturing)? Any chance that it will be in a patch 4, or do we have to wait until NI Week?

I don't see why or how reading each channel individually and then writing them all to a new file will help. Could you please explain a bit more? Also, for large files (multi-gigabyte), wouldn't this give a big performance hit and double the disk footprint while the operation is running?

Secondly, let me explain what Defragment is doing. For example, if you write data to a file multiple times, you'll probably get a TDMS file with multiple segments and headers, so the .tdms_index file will be quite large. What Defragment does is reorganize the data in the TDMS file so that you get only one segment/header and a tiny .tdms_index file.

So, after you have written the file, you can make a VI that reads the channels from the file and writes the data into a new TDMS file. Writing only once will not result in multiple segments, so the newly written TDMS file is well organized. If you can't write all channels to the TDMS file at once, you can write them channel by channel, if there are not too many channels. Does this make sense?
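Outside LabVIEW, the same read-and-rewrite idea can be sketched in Python with the third-party npTDMS library (my own illustration, not part of the original discussion; the file names are placeholders):

    # Sketch of the read-and-rewrite workaround using npTDMS (assumed
    # installed via "pip install npTDMS"); paths are placeholders.
    from nptdms import TdmsFile, TdmsWriter, ChannelObject

    # TdmsFile.read() loads everything into memory, so this assumes the
    # data fits in RAM; for multi-gigabyte files you would read and
    # rewrite one channel at a time instead.
    original = TdmsFile.read("fragmented.tdms")

    # Rewrite all channels in a single segment. One write pass means one
    # header, so the new file behaves like a defragmented one.
    with TdmsWriter("rewritten.tdms") as writer:
        channels = [
            ChannelObject(group.name, channel.name, channel[:])
            for group in original.groups()
            for channel in group.channels()
        ]
        writer.write_segment(channels)

The single write_segment call is what keeps the output down to one segment and a tiny .tdms_index file, which is the same effect Defragment is meant to achieve.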

I recommend doing this if you want to write a TDMS 2.0 file and benefit from TDMS high-speed streaming in LabVIEW 2009: it's a workaround that avoids Defragment but still gets you a "defragmented" file. I'm afraid the fix will not ship with 2009 SP1.

Type list disk and press Enter to get a listing of the disks on the system. (More accurately, the disks visible to diskpart.) Figure out which disk contains the partition you want to assign a drive letter to.
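For example, a typical diskpart session for this might look like the following, where the disk and partition numbers and the drive letter are placeholders for your own system:

    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> list partition
    DISKPART> select partition 2
    DISKPART> assign letter=E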

When you run the Disk Defragmenter (Defrag.exe) utility on a volume on a computer that's running Windows 8.1 or Windows Server 2012 R2, the defrag operation fails. Additionally, event ID 257 is logged in the Application log. This event displays a "The parameter is incorrect" error message. In this case, you may be unable to optimize the volume for space efficiency.

Hi Guys!
I'm having issues when syncing to Dropbox. Currently I'm syncing one Share (about 117.7GB) to my Dropbox account.
I started the service with no problems; ReadyNAS reported in its logs that the sync was working, and it pushed all files to Dropbox.
After a couple of days I noticed (in the logs) that it did not start live sync, but keeps uploading files from the beginning, (probably) overwriting the same files that are already on this Dropbox account.
ReadyNAS shows no errors in its system logs or in the Dropbox session history logs.
I've made some tests with smaller folders containing fewer files and everything worked just fine.
One thing I've noticed is: my disk schedule is set to run defragmentation once a week, and during the sync test with smaller folders and files, after the defragmentation all files were pushed to Dropbox again from the beginning, overwriting the existing ones, which does not make any sense.
It is very annoying that ReadyNAS checks for file changes on the cloud account (or at least says it does) before starting to sync, but then syncs all files anyway, overwriting exactly the same files that are already on Dropbox. I have to wait quite a long time for my files to be updated on cloud storage, and my internet connection is affected the whole time (despite lowering the QoS for ReadyNAS on my router).
My question is: has anyone had issues like this before? Should I never run defragmentation if I want to sync my files with a Dropbox account? What about disk balance and scrubbing, do they cause the same issues? What is the solution?
I'd appreciate it if anyone could help.
Model: ReadyNAS 102
Firmware: 6.10.6

Defrag is different from the other maintenance tasks, and if you think it is linked to the Dropbox problem, I suggest that you take it out of the weekly schedule for a while. Note there is also an autodefrag option for each share that you might also want to turn off for the share you are syncing.

Thanks for your answer!
Yes, I'm currently thinking that defrag is causing that problem.
I've now taken it out of the schedule, but what about performance? Should I never use it again if I want to sync my data?
Thanks for the autodefrag info; I checked and it is off too, but I didn't realize that such an option existed.

Just to be clear - defragging a fragmented file will compact the file. But in the case of btrfs, defragging can reduce free space (especially if you use snapshots), and btrfs defrag won't compact free space. So it is a trade-off.

It's the Dropbox cloud service that is deciding to resync the share, not the btrfs file system. At this point we don't know why. Personally I don't use the service, so it's not something I can diagnose. You could log in with SSH and see if you can compare the file and folder attributes before/after a defrag. That might give you some clues.
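One way to do that comparison, assuming SSH access and Python on the NAS, is to snapshot the attributes of every file in the share before and after the defrag and diff the two listings; the share path below is a placeholder:

    # Print one line of attributes per file so two runs can be diffed.
    # /data/myshare is a placeholder for the share's path on the NAS.
    import os
    import sys

    root = sys.argv[1] if len(sys.argv) > 1 else "/data/myshare"
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            # inode, size, mtime and ctime are the attributes a sync
            # client is most likely to key on.
            print(f"{st.st_ino}\t{st.st_size}\t{st.st_mtime:.0f}\t{st.st_ctime:.0f}\t{path}")

If the diff shows something like ctime changing across the defrag while the content does not, that would be a plausible reason for the sync service to re-upload everything.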

I have the processors UnpackContent -> MergeContent. I use this to untar a file and then zip the files. I am using the Defragment merge strategy and have been noticing that when MergeContent has to handle many FlowFiles at once from many different fragments (the FlowFile queue builds up before MergeContent), I get "Expected number of fragments is X but only getting Y".

It was about 20 tar files, which turned into almost 1000 individual files that I was looking to zip back into 20 files. It looks like the major problem was the number of bins. It was set to 1; once I increased that, it had no problem with the multiple tar files that were queued up.

I only had 1 concurrent task, so I was surprised that even with 1 bin it would look to create a new bin. The selected prioritizer was the default "first in, first out", so if it's untarring one tar file at a time, it should finish a whole bin before moving to the next one.
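For reference, a MergeContent configuration along these lines should keep a bin available for each tar archive in flight; the exact values are illustrative rather than taken from the posts above:

    Merge Strategy          = Defragment
    Merge Format            = ZIP
    Maximum number of Bins  = 25      (at least the number of tar files expected in the queue)
    Max Bin Age             = 5 min   (so an incomplete fragment set eventually routes to failure instead of blocking)

With the Defragment strategy, MergeContent groups FlowFiles by the fragment.identifier / fragment.index / fragment.count attributes that UnpackContent writes, so each tar file's pieces need their own bin until the set is complete.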

It's because something (often Windows Defender) has re-opened the files between you analyzing and actually clicking the defrag button.
You can't defrag files that are open.
The longer the gap between analyzing and defragging, the more likely you are to see the message.

Do another analyze and it will not find as many fragmented files, if it finds any.
Any that are still open will not be listed, but you may see the few 'aborted' ones again if they have since been closed, so defrag them again.

The problem there is that at 90% capacity used, that disk is pretty full - you have run out of the free space required for a defragment to work.
Even the built-in Windows defragmenter won't work with an HDD that is that full.

A lot of what is listed there seems to be 'Features-on-Demand' packages. Are you actually using those features? If not, you can remove them.
-us/windows-hardware/manufacture/desktop/features-on-demand-v2--capabilities
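If you decide to remove any of them, the usual route is DISM from an elevated prompt; the capability name below is a placeholder that you would replace with one of the names from the listing:

    DISM /Online /Get-Capabilities
    DISM /Online /Remove-Capability /CapabilityName:<name-from-the-listing>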

The better, fully supported, no-downtime method is to create a new mailbox database and move all the mailboxes to the new database. Exchange 2010 supports moving mailboxes online, so there is no downtime; you can do the moves in the middle of the workday.

If some of the mailboxes are corrupt, it will skip those and let you deal with them when the process is complete - either deleting the mailboxes, or skipping the damaged items and moving the good information over.

Many assume the answer is to perform an offline defragmentation of the database using ESEUTIL. However, that's not our recommendation. When you perform an offline defragmentation you create an entirely new database, and the operations performed to create it are not logged in the transaction logs. The new database also has a new database signature, which means that you invalidate the database copies associated with this database.

As everyone is suggesting, defragmentation is not recommended or necessary. The best way is to create a new mailbox database and move all the mailboxes to the new database. Here is a detailed guide for reclaiming whitespace.
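A rough sketch of that approach in the Exchange Management Shell, where the database and server names are placeholders and the parameters should be checked against your own environment:

    # Create and mount the new database (names and paths are placeholders).
    New-MailboxDatabase -Name "DB02" -Server "EX01" -EdbFilePath "D:\DB02\DB02.edb"
    Mount-Database "DB02"

    # Move every mailbox from the old database, allowing a few bad items
    # per mailbox so corrupt content does not block the move.
    Get-Mailbox -Database "DB01" -ResultSize Unlimited |
        New-MoveRequest -TargetDatabase "DB02" -BadItemLimit 10

Once all the move requests complete and are cleared, the old database can be dismounted and removed, which recovers the whitespace without an offline ESEUTIL defragmentation.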
