Memory error while post-processing data in Jupyter notebook


Diksha Prajapati

Nov 27, 2024, 11:23:20 AM
to mumax2
Hello everyone,
I am facing serious issues, such as my PC hanging during the stacking of OVF files and further post-processing to obtain dispersion and mode profiles. The MuMax3-generated data (OVF files and their corresponding NumPy files), which I use for post-processing, is approximately 30–32 GB in size. The method I use is the one provided in Tutorial 4 (for data post-processing in Jupyter Notebook). When the code reaches the cell where stacking and FFT are performed, it displays a memory error (something like "unable to allocate 20 GiB for an array with shape (1503, 3, 1, 2400, 384) and data type float32"). My system has the following specifications:

Processor: 12th Gen Intel(R) Core(TM) i7-12700   2.10 GHz
RAM: 64GB
Windows 11 pro
System type: 64-bit operating system, x64-based processor
NVIDIA GeForce RTX 3050: driver version: 566.14

Has anyone else faced the same problem? If so, could someone please suggest what I could do to address this issue? Any guidance would be greatly appreciated. Thank you for your time.
With regards,
Diksha

ziyee Lu (Luziyee)

Nov 27, 2024, 8:33:14 PM
to mumax2
I'm sorry that I can't answer your question, as I'm still a beginner. Please forgive me for asking you some questions instead. In one of your earlier posts, I noticed that you subtracted the magnetization of a specific state during post-processing to obtain a new magnetization. Did you do this on the OVF files directly, or in the MuMax3 script? If you are willing to share, could you please let me know how you handled it? I would be very grateful!

Felipe Garcia

Nov 28, 2024, 4:03:29 PM
to mum...@googlegroups.com
Dear Diksha,

You are storing a 1503 × 3 × 1 × 2400 × 384 array. At 4 bytes per float32 element that is already about 15.5 GiB, and the roughly 20 GiB in the error message suggests NumPy also needed a temporary copy during the stacking. Either way, it exceeds what your system can allocate as one contiguous array at that moment.
In the tutorial, the first step is to select one layer and one magnetization component. The best approach is to adapt the code so it loads only the component that interests you.
You may even be able to do that during post-processing with mumax3-convert.
Since your system is 2D, the layer dimension is already 1 and cannot be reduced further, so there is no saving to be had from that axis.
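To illustrate the idea of keeping only one component while stacking, here is a minimal sketch. The shapes are tiny stand-ins for the real (1503, 3, 1, 2400, 384) data, and the random frame stands in for one loaded OVF file; only the slicing pattern is the point.

```python
import numpy as np

# Tiny stand-in shapes for (Nt, 3, Nz, Ny, Nx) = (1503, 3, 1, 2400, 384)
Nt, Nz, Ny, Nx = 8, 1, 24, 16

frames = []
for t in range(Nt):
    # Stand-in for one OVF frame with all 3 components: (3, Nz, Ny, Nx)
    frame = np.random.rand(3, Nz, Ny, Nx).astype(np.float32)
    # Keep only one component (here index 2, mz) of layer 0 -> (Ny, Nx),
    # so the stacked array is 1/3 of the full size
    frames.append(frame[2, 0])

m = np.stack(frames)               # (Nt, Ny, Nx) instead of (Nt, 3, Nz, Ny, Nx)
spectrum = np.fft.rfft(m, axis=0)  # FFT over time only; rfft halves the output
print(m.shape, spectrum.shape)
```

The key point is to discard the unneeded components and layers inside the loading loop, before stacking, so the full 5-dimensional array is never materialized.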

On the other hand, with your amount of data it may be worth writing CUDA code using cuFFT, as MuMax3 itself does for the demag field. This will speed up the calculation considerably. I have no time to explain it here, but once it is done it is very helpful.

Best regards,
Felipe

--
You received this message because you are subscribed to the Google Groups "mumax2" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mumax2+un...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/mumax2/7d30bad3-5570-4cee-b9b8-4c8c84f27f9dn%40googlegroups.com.

Антон Луценко

Nov 29, 2024, 3:26:36 AM
to mumax2
Dear Diksha, your array dimensions seem a bit odd. For such an analysis you would typically import one snapshot at a time (or even one layer or one line of the sample, especially if the sample is big), so your stacked dimensions would be something like Nt × Nz × Ny × Nx × 3 (the last axis being the magnetization components mx, my, mz).

Nevertheless, what you can do is process the data in parts: import only one layer of the sample instead of the whole sample; import only the one or two dynamic components of the magnetization if the third one is static (close to 1); apply rfft to the real-valued data to halve the size of the FFT output; and, if the precision loss is acceptable, reduce the precision from float32 to float16.
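As a sketch of the "process by parts" idea: keep the time series on disk as a memory-mapped file and FFT it in spatial chunks, so the peak RAM usage is bounded by the chunk size plus the output spectrum. The shapes are small stand-ins and the file name is hypothetical.

```python
import numpy as np

# Small stand-in shapes; the real data would be (1503, 2400, 384)
Nt, Ny, Nx = 16, 8, 6

# Time series stored on disk rather than in RAM (file name is illustrative)
m = np.memmap("m_series.dat", dtype=np.float32, mode="w+", shape=(Nt, Ny, Nx))
m[:] = np.random.rand(Nt, Ny, Nx).astype(np.float32)

chunk = 2  # number of y-rows FFT'd at a time; tune to available memory
# rfft of a length-Nt real signal gives Nt//2 + 1 frequency bins
spec = np.empty((Nt // 2 + 1, Ny, Nx), dtype=np.complex64)
for y0 in range(0, Ny, chunk):
    # Only this chunk of the time series is read into RAM at once
    spec[:, y0:y0 + chunk] = np.fft.rfft(m[:, y0:y0 + chunk], axis=0)

print(spec.shape)  # (9, 8, 6)
```

Storing the spectrum as complex64 instead of the default complex128 halves the output size, which matters once the spatial dimensions are the real 2400 × 384.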

Also, you can save some space and time if you directly import the OVF files into numpy array. The script link was posted before: 

Diksha Prajapati

Nov 30, 2024, 12:12:43 PM
to mumax2
Thank you so much for your replies. I'll try it out.
With regards,
Diksha 
