The M2 function doesn't support multicore?


Seongho Bae

unread,
Apr 23, 2014, 4:57:10 AM
to mirt-p...@googlegroups.com
Dear all.

I have a question.

Does the M2 function not support multicore processing?

I used the mirtCluster() function when fitting my factor models.
I can see 100% load on all 8 of my cores when the mirt package estimates the IRT parameters, but not when the M2 function runs; it fully uses just one core.
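For reference, a minimal sketch of my setup (`mirtCluster()`, `mirt()`, and `M2()` are the actual mirt functions; the dataset and model specification here are just placeholders):

```r
library(mirt)

mirtCluster(8)                    # spin up 8 worker processes for parallel estimation
mod <- mirt(mydata, model = 1)    # estimation: all 8 cores load up as expected
fit <- M2(mod)                    # fit statistics: only one core is used here
```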

My setup is Ubuntu 14.04, R 3.1.0, RStudio, OpenBLAS, and an Intel i7-4770K processor.

Thanks.

--
Blessings,
Seongho.

Phil Chalmers

unread,
Apr 23, 2014, 10:26:39 AM
to Seongho Bae, mirt-package
Hi Seongho,

The M2 function is far from optimized, though I plan to make it faster once I add polytomous item support. For now the implementation is written in pure R code with lots of loops and no parallel architecture support, so it's bound to be slow at times. 

Phil


--
You received this message because you are subscribed to the Google Groups "mirt-package" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mirt-package...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Phil Chalmers

unread,
May 10, 2014, 11:26:11 AM
to mirt-p...@googlegroups.com, Seongho Bae
The M2 function has been updated to be a little more optimal for dichotomous and polytomous items, calling lower-level language routines to do some of the heavier lifting in the loops. However, in its current state I'm satisfied with its performance and am not too interested in further improving it, or even in adding multicore support. The reason is that M2 isn't limited by numerical computation per se; it's limited by how much RAM is available and how well the matrix operations can be managed. I've found that the routine will crash after about 100 items simply because the matrix objects become extremely large and dense. This should be consistent with other software, which undoubtedly has the same issue, and it was mentioned in a few of Alberto's articles on the topic. Cheers.
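To make the scaling concrete, here is a rough back-of-the-envelope calculation in R. It assumes, per the Maydeu-Olivares and Joe formulation that M2 is based on, that the working matrices have one row per univariate and bivariate margin, i.e. about p + p(p-1)/2 rows for p dichotomous items; the figures are illustrative, not exact:

```r
p <- 102                    # number of items
m <- p + p * (p - 1) / 2    # univariate + bivariate margins
m                           # 5253 rows/columns

# One dense m x m matrix of doubles:
bytes <- m^2 * 8
round(bytes / 1024^3, 2)    # ~0.21 GiB per matrix

# A handful of such matrices plus their intermediate products easily
# pushes total usage into the multi-GiB range, and since m grows like
# p^2/2, memory grows like O(p^4) -- so ~100 items is a practical wall.
```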

Phil 

Seongho Bae

unread,
May 10, 2014, 12:49:36 PM
to mirt-p...@googlegroups.com, Seongho Bae
Dear Phil,

I agree with you that no more performance work is needed on the M2 function. I checked and tested what you said; it needs a lot of memory, not CPU performance.

I'm now trying to extract the M2 statistics for 102 items with 276 observations from an 11-factor solution model, and it needs 5.4GB of RAM to do so.

So, I recommend upgrading the hardware to speed up the M2 calculation. Yesterday I upgraded my RAM from 4GB to 12GB, and I added an SSD drive used only for statistical computing on Ubuntu Linux, because when RAM fills up, Ubuntu falls back on swap memory. With swap on an SSD rather than an HDD, the M2 calculation stays stable and does not crash.
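For anyone reproducing this, the swap setup I have in mind looks roughly like the following on Ubuntu (the swap file path and size are just examples; these commands need root):

```shell
# Check current swap and free memory
swapon --show
free -h

# Create and enable an 8 GiB swap file on the SSD
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```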


Seongho


Phil Chalmers

unread,
May 10, 2014, 2:59:40 PM
to Seongho Bae, mirt-package
Hi Seongho,

On Sat, May 10, 2014 at 12:49 PM, Seongho Bae <seongh...@gmail.com> wrote:
Dear Phil,

I agree with you that no more performance work is needed on the M2 function. I checked and tested what you said; it needs a lot of memory, not CPU performance.

I'm now trying to extract the M2 statistics for 102 items with 276 observations from an 11-factor solution model, and it needs 5.4GB of RAM to do so.

11 factors is huge here; I really can't guarantee the numerical accuracy of the model, since by default only 3 quadpts are used in the integration. Multidimensional models with the M2 still need to be dealt with better, which is something I know Alberto and his colleagues are working on.
 

So, I recommend upgrading the hardware to speed up the M2 calculation. Yesterday I upgraded my RAM from 4GB to 12GB, and I added an SSD drive used only for statistical computing on Ubuntu Linux, because when RAM fills up, Ubuntu falls back on swap memory. With swap on an SSD rather than an HDD, the M2 calculation stays stable and does not crash.

This sounds interesting in theory, but I don't think it will work without explicitly telling R to do it with something like the bigmemory project. I have 24GB on my Linux box, and with 150 items and N = 1000 for a unidimensional model I ran out of RAM and the R routine crashed. I don't mind adding bigmemory support for this function, but I would need to look into whether certain operations exist (like matrix inversion, which is typically done in RAM, or finding the QR complement).
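A sketch of what file-backed storage could look like with the bigmemory package (the dimensions and file names here are hypothetical, and whether the required decompositions exist for these objects is exactly the open question):

```r
library(bigmemory)

# A file-backed matrix lives on disk rather than in RAM
X <- filebacked.big.matrix(nrow = 5253, ncol = 5253, type = "double",
                           backingfile = "M2_work.bin",
                           descriptorfile = "M2_work.desc")
X[1, 1] <- 1.0    # element access works much like a regular matrix

# But solve(), qr(), etc. have no big.matrix methods in base R, so the
# inversion step would still need a custom out-of-core routine.
```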

Phil