Dear Eva,
I would recommend an entirely different approach: the batch SPMD style of parallel programming (Wei-Chen mentioned it as his item 3).
I assume that you have a data set on which the evaluation of func0 takes a long time. You would like to split up the data set, evaluate the function on each piece, and then sum the local results for a final result. The following assumes that you can replicate your data in every process. The get.jid() function then subsets the data list to a different local piece on every process, and allreduce() sums the local results:
library( pbdMPI )   # provides get.jid(), allreduce(), comm.print()
init()
my.data <- data[ get.jid( length( data ) ) ]   # this rank's piece of the replicated data
funct1 <- function( parameters ) {
    res <- lapply( parameters, func0, my.data )   # evaluate func0 with the local data
    allreduce( sum( unlist( res ) ) )             # sum the local results across all ranks
}
result <- optim( parameters, funct1 )   # funct1 finds my.data in the global environment
comm.print( result )
finalize()
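In case it helps to see how get.jid() splits the work, here is a small standalone check (my illustration, not part of your code). Run on, say, 4 processes, each rank prints the contiguous block of indices it would own out of 10 elements:
library( pbdMPI )
init()
comm.print( get.jid( 10 ), all.rank = TRUE )   # each rank prints its own block of indices
finalize()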
You put the SPMD code above (and the rest of your code defining func0 and reading the data) in a file and run it with mpirun as an Rscript batch job. I do not address how you get your data; there are several approaches, depending on which file system you have available and how big the data is.
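For example, if the script is saved as eva_optim.R (a file name I made up for illustration) and you want 4 MPI processes, the launch looks something like:
mpirun -np 4 Rscript eva_optim.R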
I am copying your message to RBigDataProgramming for a wider audience.
George