rsc_fpops_est / GPU


rebi...@web.de

Aug 9, 2025, 3:36:23 PM
to boinc_projects
Hi,
 
we are having a discussion about this value in the WU input_template for GPU applications. The estimated runtime that BOINC computes is either much too high or much too low.

estimated runtime = rsc_fpops_est / FLOPS

For CPUs this formula gives correct results, but for GPUs it is completely wrong.
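For example (illustrative numbers only, using the rsc_fpops_est of 8e11 from our input_template): a CPU core sustaining about 4 GFLOPS gives 8e11 / 4e9 = 200 s, which is about right. A GPU with a nominal peak of 10 TFLOPS would give 8e11 / 1e13 = 0.08 s if the peak value were used directly, while the task actually runs far longer because the app reaches only a fraction of peak; depending on the numbers used, the estimate can also be off in the other direction.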
 
@David: DCF could be a solution, but only for one subproject (GPU). We are using fixed credit with the old credit system and a quorum of 1, without DCF.
 
-Reb
 
 

David P. Anderson

Aug 10, 2025, 3:59:59 PM
to rebi...@web.de, boinc_projects
Which part of the code are you talking about (client or server)?

In calculations of that sort, FLOPS is different for CPU and GPU app versions. If you're using plan classes for your GPU apps, the FLOPS should be estimated correctly.
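In rough terms (a sketch of the mechanism, not the exact code): for a GPU plan class the scheduler derives a projected FLOPS for the app version from the device's peak FLOPS and the plan class's gpu_peak_flops_scale, and the client then estimates

  estimated runtime = rsc_fpops_est / projected FLOPS

so if the scale factor reflects what your app actually achieves on the GPU, the estimates should land in the right range.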


rebi...@web.de

Aug 10, 2025, 4:23:52 PM
to David P. Anderson, boinc_projects
Hello David,
 
Both client and server.
 
On the server side it starts with this entry in the input_template:
<rsc_fpops_est>8e11</rsc_fpops_est>
 
We have plan classes for AMD, Intel Arc and Nvidia apps on Linux and Windows.
 
But how can the client be made to compute the correct estimated runtime? The plan_class has a <gpu_peak_flops_scale> value. In general, what do we need to change or add?
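For context, one of our GPU plan class entries in plan_class_spec.xml looks roughly like this (names and values are illustrative, not our exact configuration):

<plan_classes>
    <plan_class>
        <name>opencl_nvidia</name>
        <gpu_type>nvidia</gpu_type>
        <opencl/>
        <!-- GPU memory the app is expected to use -->
        <gpu_ram_used_mb>1024</gpu_ram_used_mb>
        <!-- fraction of GPU peak FLOPS the app is assumed to achieve -->
        <gpu_peak_flops_scale>0.2</gpu_peak_flops_scale>
        <!-- fraction of a CPU core used alongside the GPU -->
        <cpu_frac>0.1</cpu_frac>
    </plan_class>
</plan_classes>

Is gpu_peak_flops_scale the value to tune here, or does something else need to change?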
 
-Reb