GPU computing with Apollo
Posted: 13 Jun 2021, 11:35
Hi,
I'm developing an EM-based algorithm to optimise choice weights. The code contains a loop in which two MNL models are estimated on a very large dataset (about 2 GB). I have set nCores to the maximum number of cores on my system minus one (7 cores), but the run time is still too long.
Would it be possible to run the estimation on a GPU instead?
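For reference, this is roughly how the parallel setting is specified; a minimal sketch, assuming the standard apollo_control list — the model name and ID column here are placeholders, not from my actual code:

```r
# Sketch of the relevant settings only; other required fields omitted.
apollo_control = list(
  modelName = "MNL_inner_loop",  # placeholder model name
  indivID   = "ID",              # assumed individual ID column
  nCores    = 7                  # max cores minus one on my machine
)
```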
Thank you