Warning during spIsoNet anisotropy correction


Dong-Hua Chen

Apr 12, 2024, 8:02:53 PM
to spis...@googlegroups.com
Hi Yun-Tao,

I ran the anisotropy correction on the test maps (emd_8731) following your tutorial and got the warnings below.
When I compared the corrected half-maps to the original half-maps in Chimera, I could not see much improvement. Could the warning be causing the lack of improvement?

Best regards,
Dong-Hua


----------
spisonet.py reconstruct emd_8731_half_map_1.mrc emd_8731_half_map_2.mrc --aniso_file FSC3D.mrc --mask emd_8731_msk_1.mrc --limit_res 3.5 --epochs 30 --alpha 1 --beta 0.5 --output_dir isonet_maps --gpuID 0 --acc_batches 2 

04-12 10:54:31, INFO     voxel_size 1.309999942779541 
04-12 10:54:31, INFO     spIsoNet correction until resolution 3.5A!                      
Information beyond 3.5A remains unchanged 
04-12 10:54:53, INFO     Start preparing subvolumes! 
04-12 10:55:29, INFO     Done preparing subvolumes! 
04-12 10:55:29, INFO     Start training! 
04-12 10:55:38, INFO     Port number: 42105 learning rate 0.0003 ['isonet_maps/emd_8731_half_map_1_data', 'isonet_maps/emd_8731_half_map_2_data']
[rank0]:[2024-04-12 10:55:50,625] [0/0] torch._dynamo.variables.torch: [WARNING] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored
[rank0]:[2024-04-12 10:56:12,778] [0/1] torch._dynamo.variables.torch: [WARNING] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored
Epoch [1/30], Train Loss: 0.5041
Epoch [2/30], Train Loss: 0.4434
Epoch [3/30], Train Loss: 0.4304
Epoch [4/30], Train Loss: 0.4219
Epoch [5/30], Train Loss: 0.4125
Epoch [6/30], Train Loss: 0.3997
Epoch [7/30], Train Loss: 0.3877
Epoch [8/30], Train Loss: 0.3798
Epoch [9/30], Train Loss: 0.3740
Epoch [10/30], Train Loss: 0.3705
Epoch [11/30], Train Loss: 0.3674
Epoch [12/30], Train Loss: 0.3636
Epoch [13/30], Train Loss: 0.3621
Epoch [14/30], Train Loss: 0.3612
Epoch [15/30], Train Loss: 0.3586
Epoch [16/30], Train Loss: 0.3565
Epoch [17/30], Train Loss: 0.3558
Epoch [18/30], Train Loss: 0.3551
Epoch [19/30], Train Loss: 0.3544
Epoch [20/30], Train Loss: 0.3538
Epoch [21/30], Train Loss: 0.3532
Epoch [22/30], Train Loss: 0.3515
Epoch [23/30], Train Loss: 0.3504
Epoch [24/30], Train Loss: 0.3503
Epoch [25/30], Train Loss: 0.3503
Epoch [26/30], Train Loss: 0.3491
Epoch [27/30], Train Loss: 0.3478
Epoch [28/30], Train Loss: 0.3480
Epoch [29/30], Train Loss: 0.3474
Epoch [30/30], Train Loss: 0.3473
04-12 12:55:36, INFO     Start predicting!
data_shape torch.Size([125, 1, 80, 80, 80]) -> size restored (334, 334, 334)
data_shape torch.Size([125, 1, 80, 80, 80]) -> size restored (334, 334, 334)
04-12 12:55:58, INFO     Done predicting 
04-12 12:55:58, INFO     combining 
04-12 12:55:58, INFO     voxel_size 1.309999942779541 
04-12 12:56:46, INFO     voxel_size 1.309999942779541 
04-12 12:57:34, INFO     Finished

Yuntao Liu

Apr 14, 2024, 4:01:42 PM
to Dong-Hua Chen, spis...@googlegroups.com
Hi Dong-Hua,

This warning should not affect the final results.
To compare the maps before and after correction, I suggest visualizing them after post-processing, or using the XYZ view in 3dmod on the half-maps; see the attached image as an example.
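Besides visual inspection, a quantitative way to see what the correction changed is to compute a Fourier Shell Correlation between the original and corrected maps. Below is a minimal NumPy sketch of a spherically averaged FSC; this is an illustration, not spIsoNet's own implementation, and the function name, shell binning, and cubic-volume assumption are all mine. In practice you would load the `.mrc` half-maps (e.g. with the `mrcfile` package) and pass their data arrays.

```python
import numpy as np

def fsc(vol1, vol2, n_shells=8):
    """Spherically averaged Fourier Shell Correlation between two
    same-shape cubic volumes (illustrative sketch, not spIsoNet code)."""
    f1 = np.fft.fftshift(np.fft.fftn(vol1))
    f2 = np.fft.fftshift(np.fft.fftn(vol2))
    n = vol1.shape[0]
    # Radial distance of each Fourier voxel from the zero-frequency center
    grid = np.indices(vol1.shape) - n // 2
    radius = np.sqrt((grid ** 2).sum(axis=0))
    edges = np.linspace(0, n // 2, n_shells + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = (radius >= lo) & (radius < hi)
        num = np.real((f1[shell] * np.conj(f2[shell])).sum())
        den = np.sqrt((np.abs(f1[shell]) ** 2).sum() *
                      (np.abs(f2[shell]) ** 2).sum())
        curve.append(num / den if den > 0 else 0.0)
    return np.array(curve)
```

Comparing `fsc(original, corrected)` shell by shell shows at which resolutions the correction actually modified the map; shells where the curve stays near 1 were left essentially unchanged.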

[Attachment: image.png]
Yuntao Liu,  Postdoc.

California NanoSystem Institute
University of California Los Angeles

