Padding in MCore refinement


Hamidreza Rahmani

Oct 5, 2025, 5:09:27 PM
to Warp
Hi all,

I have a VLP dataset that is at Nyquist, and it seems I cannot go to bin 1 (apix=1.058 and diameter=360) because my RTX5000 (24GB) cannot handle the full-size map.

When I rescale to 1.5A everything works, and I noticed that M pads the particle to diameter*2. Is there a way to lower this padding, like in RELION?

If not, is this diameter used somewhere else? Can I just tell M the diameter is 200 and use a smaller map?

Sorry, I know I could just try this and see what happens, but I am trying to avoid conflicting M species and versions.

Best,
Hamid

teg...@gmail.com

Oct 5, 2025, 5:22:28 PM
to Warp
Hi Hamid,

What pixel size were you able to use? The binning factor doesn't have to be an integer. You can try lowering the pixel size in small steps.

You can also decrease the diameter for this species, but keep in mind that particles are zero-masked to this diameter during refinement. Also, unlike Relion, M doesn't use oversampling during the particle back-projection step, i.e. you'll start seeing aliasing artifacts in the outer map regions more quickly.
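
Just to put rough numbers on why small steps in pixel size help, here is a quick back-of-the-envelope sketch in Python. It is only an estimate based on the diameter*2 padding mentioned above, not what M actually allocates; M may also round the box to an FFT-friendly size, and real memory use depends on how many buffers are held at once, so treat the output as a relative comparison only.

# Rough estimate of the padded box and per-volume memory vs. pixel size.
# ASSUMPTION: padded box is ~diameter*2 / apix voxels per side (see above);
# this ignores any internal rounding and extra buffers in M.

diameter_A = 360.0      # particle diameter in Angstroms
bytes_per_voxel = 4     # float32

for apix in (1.058, 1.25, 1.4, 1.5):
    box = int(round(2 * diameter_A / apix))       # voxels per side
    gib = box**3 * bytes_per_voxel / 1024**3      # one real-valued volume
    print(f"apix={apix:5.3f}  box~{box}  ~{gib:.2f} GiB per float32 volume")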

MCore's --cpu_memory flag might also help.

Cheers,
Dimitry

Hamidreza Rahmani

Oct 6, 2025, 11:41:42 AM
to teg...@gmail.com, Warp
Hi Dimitry,

I managed to go to 1.5A (from 1.058A); using --cpu_memory, it is refining to 3.4A global resolution, and large parts of the map are at 3A.

After going back and forth, I realized the problem is that I cannot make a new species at 1.058A with MTools, and MTools doesn't have a --cpu_memory option (because there is no refinement?).

Is my only option here to try 1.25A, or whatever pixel size fits in GPU memory?

Best,
Hamid



teg...@gmail.com

Oct 6, 2025, 3:40:00 PM
to Warp
Hi Hamid,

You don't need to create a new species to change the pixel size. You can just edit the .species file to change the value and then run an iteration of MCore without any of the refinement options enabled. This will produce a reconstruction with the updated pixel size and resample the mask.
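
In case you want to script that edit, here is a minimal sketch, assuming the .species file is plain XML and that the pixel size sits in a root-level attribute named something like PixelSize. The path and attribute name below are placeholders, not confirmed names; inspect your own file first and keep a backup.

# Minimal sketch for editing the pixel size in a .species file.
# ASSUMPTIONS: the file is plain XML and the pixel size is a root-level
# attribute named "PixelSize" -- check your own file and adjust; the path
# below is a placeholder. Always keep a backup copy before editing.

import shutil
import xml.etree.ElementTree as ET

species_path = "species/my_vlp/my_vlp.species"    # placeholder path
new_apix = "1.25"

shutil.copy(species_path, species_path + ".bak")  # backup before editing

tree = ET.parse(species_path)
root = tree.getroot()
print("current attributes:", root.attrib)         # see what is actually there

if "PixelSize" in root.attrib:                    # only touch it if present
    root.set("PixelSize", new_apix)
    tree.write(species_path)
else:
    print("No PixelSize attribute found -- edit the correct field by hand.")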

However, at the end of MCore's refinement, it goes through similar steps as species creation in MTools, so if one of those runs out of memory, you'll have the same problem. What about values between 1.25 and 1.5? Or running the final step on a different system with more GPU memory?

Cheers,
Dimitry

Hamidreza Rahmani

Oct 6, 2025, 4:08:15 PM
to teg...@gmail.com, Warp
Oh, that is cool; it is great to know I can edit the file, because I had been using MTools with --resample_apix so far.

At the moment it is running at 1.25A/px, making 512x512x512 volumes. It seems like passing GPU IDs explicitly helped avoid some crashes, but I will report back on what the actual limit is.

I don't have WarpTools installed on our cluster, but this would be a good reason to install it! However, I am trying to avoid uploading the entire dataset.

Best,
Hamid

Hamidreza Rahmani

Oct 13, 2025, 12:01:11 PM
to teg...@gmail.com, Warp
Update on this: my GPU could manage as low as 1.4A. However, I could not perform frame refinement with --refine_tiltmovies, and I get 3.1A as the most common local resolution. I think I am going to stop here; hopefully I will have another high-res structure that is not as big and not from montaged micrographs that are 11664x11664!

Thanks for the help. 

Best,
Hamid