
Convergence with different types of draws

Posted: 28 Aug 2020, 12:01
by cybey
Hello, everyone,

I would like to know whether there is a rule of thumb for when to use which type of draws (Halton, Sobol, MLHS, ...).
During my research, several papers recommended against Halton draws because of correlations between dimensions when the number of parameters is high. For example, the paper...

Czajkowski, Mikołaj; Budziński, Wiktor (2019): Simulation error in maximum likelihood estimation of discrete choice models. Journal of Choice Modelling 31, pp. 73–85. DOI: 10.1016/j.jocm.2019.04.003.

... concludes that Sobol draws are usually "better" (see below) than Halton draws:

"We compare the performance of pseudo-random draws with three quasi Monte Carlo methods (Halton, Sobol and modified Latin hypercube sampling) under 27 experimental conditions that differ with respect to experimental design, number of individuals and number of choice tasks per individual. Based on a Monte Carlo simulation using 100 to 1,000,000 draws, we can compare the relative efficiency of different types of draws. We consistently find that a scrambled Sobol sequence performs the best in terms of the lowest simulation error, while being matched by scrambled Halton draws in the case of 10 attributes."
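To make "simulation error" concrete, here is a minimal sketch using SciPy's qmc module (requires scipy >= 1.7). The target integral, draw counts and seeds are my own illustration, not taken from the paper: each draw type approximates E[exp(Z)] for Z ~ N(0,1), whose exact value is exp(0.5), and we look at the absolute error.

```python
import numpy as np
from scipy.stats import qmc, norm

# Illustrative target: E[exp(Z)], Z ~ N(0,1), exact value exp(0.5).
true_value = np.exp(0.5)
n, d = 1024, 1  # 1024 draws, one random parameter (power of 2 suits Sobol)

def sim_error(uniforms):
    # Transform uniform draws to normal draws, then average exp(Z).
    # Clipping guards against ppf(0) = -inf at the sequence boundaries.
    z = norm.ppf(np.clip(uniforms, 1e-12, 1 - 1e-12))
    return abs(np.mean(np.exp(z)) - true_value)

rng = np.random.default_rng(42)
errors = {
    "pseudo-random": sim_error(rng.random((n, d))),
    "Halton (scrambled)": sim_error(qmc.Halton(d, scramble=True, seed=1).random(n)),
    "Sobol (scrambled)": sim_error(qmc.Sobol(d, scramble=True, seed=1).random(n)),
}
for name, err in errors.items():
    print(f"{name:20s} error = {err:.6f}")
```

With one dimension and this many draws, both quasi-random sequences typically land far closer to the true value than pseudo-random draws; the paper's point is about how this gap behaves as the number of dimensions grows.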

Now, in some model estimations (e.g. MIXL or ICLV), I have found that a very high number of draws increases the probability that the model will not converge, or that I get NAs for the standard errors, regardless of which type of draws is used. Moreover, Sobol draws lead to NAs in the standard errors at lower draw counts than Halton draws: for example, estimating an ICLV with 500 Halton draws still works, whereas 500 Sobol draws result in NAs.

Do you have any advice/experience?
I look forward to your answers! :-)

Re: Convergence with different types of draws

Posted: 07 Sep 2020, 11:06
by stephanehess
Hi

I wouldn't go as far as objectively stating that one type of draws is better than another, but you can say the following:
- with just one random parameter, MLHS offers the most uniform coverage of the space of integration
- Haltons are very good with low numbers of dimensions, say up to 5 random parameters, but not above

In relation to your point about non-convergence: basically, more draws is ALWAYS better, in that it gets the numerical approximation closer to the model you actually wish to estimate. With low numbers of draws, you can sometimes get results for a model that is in fact not identified. So the fact that you are seeing problems with higher numbers of draws probably points to a problem with your model.

Best wishes

Stephane