First of all, thank you so much for your contribution to this tool.
I am estimating a latent class model with binary choices, and I am trying to increase the number of classes from 2.
The betas for the choice probabilities are "total time" and "cost", and the membership variables are socio-demographic.
With 2 classes everything worked fine, but when I increase to 3 classes, several problems appear:
1) Sometimes the standard errors show "NaN".
2) When I then revise the membership variables, the estimated beta values become really strange.
Could you look through the code and the results and give any comments on the problem?
Sadly I cannot move forward, because even the 3-class model does not work.
Code:
### Vector with names (in quotes) of parameters to be kept fixed at their starting value in apollo_beta, use apollo_fixed = c() if none
apollo_fixed = c("asc_2", "delta_c", "gamma_jobless_c",
                 "gamma_caruser_c", "gamma_commuter_c", "gamma_public_commute_c",
                 "gamma_shop_tottori_c", "gamma_hospital_c", "gamma_shop_many_c",
                 "gamma_iwami_commute_c", "gamma_elderly_c")
#apollo_fixed = c()
# ################################################################# #
#### DEFINE LATENT CLASS COMPONENTS ####
# ################################################################# #
apollo_lcPars = function(apollo_beta, apollo_inputs){
  lcpars = list()
  lcpars[["beta_tt"]] = list(beta_tt_a, beta_tt_b, beta_tt_c)
  lcpars[["beta_tc"]] = list(beta_tc_a, beta_tc_b, beta_tc_c)

  V = list()
  V[["class_a"]] = delta_a + gamma_elderly_a*elderly + gamma_jobless_a*job2 +
    gamma_caruser_a*caruser + gamma_commuter_a*commuter +
    gamma_iwami_commute_a*iwami_commute + gamma_public_commute_a*public_commute +
    gamma_shop_many_a*shop_many + gamma_shop_tottori_a*shop_tottori +
    gamma_hospital_a*hospital
  V[["class_b"]] = delta_b + gamma_elderly_b*elderly + gamma_jobless_b*job2 +
    gamma_caruser_b*caruser + gamma_commuter_b*commuter +
    gamma_iwami_commute_b*iwami_commute + gamma_public_commute_b*public_commute +
    gamma_shop_many_b*shop_many + gamma_shop_tottori_b*shop_tottori +
    gamma_hospital_b*hospital
  V[["class_c"]] = delta_c + gamma_elderly_c*elderly + gamma_jobless_c*job2 +
    gamma_caruser_c*caruser + gamma_commuter_c*commuter +
    gamma_iwami_commute_c*iwami_commute + gamma_public_commute_c*public_commute +
    gamma_shop_many_c*shop_many + gamma_shop_tottori_c*shop_tottori +
    gamma_hospital_c*hospital

  mnl_settings = list(
    alternatives = c(class_a=1, class_b=2, class_c=3),
    avail        = 1, # all of the alternatives are available for every choice observation in the data
    choiceVar    = NA,
    V            = V
  )
  lcpars[["pi_values"]] = apollo_mnl(mnl_settings, functionality="raw")
  lcpars[["pi_values"]] = apollo_firstRow(lcpars[["pi_values"]], apollo_inputs)

  return(lcpars)
}
# ################################################################# #
#### GROUP AND VALIDATE INPUTS ####
# ################################################################# #
apollo_inputs = apollo_validateInputs()
# ################################################################# #
#### DEFINE MODEL AND LIKELIHOOD FUNCTION ####
# ################################################################# #
apollo_probabilities = function(apollo_beta, apollo_inputs, functionality="estimate"){

  ### Attach inputs and detach after function exit
  apollo_attach(apollo_beta, apollo_inputs)
  on.exit(apollo_detach(apollo_beta, apollo_inputs))

  ### Create list of probabilities P
  P = list()

  ### Define settings for MNL model component that are generic across classes
  mnl_settings = list(
    alternatives = c(alt1=2, alt2=1),
    avail        = list(alt1=1, alt2=1),
    choiceVar    = choice
  )

  ### Loop over classes
  for(s in 1:3){
    ### Compute class-specific utilities
    V = list()
    V[["alt1"]] = asc_1 + beta_tt[[s]]*TT1 + beta_tc[[s]]*TC1
    V[["alt2"]] = asc_2 + beta_tt[[s]]*TT2 + beta_tc[[s]]*TC2

    mnl_settings$V             = V
    mnl_settings$componentName = paste0("Class_", s)

    ### Compute within-class choice probabilities using MNL model
    P[[paste0("Class_", s)]] = apollo_mnl(mnl_settings, functionality)

    ### Take product across observations for the same individual
    P[[paste0("Class_", s)]] = apollo_panelProd(P[[paste0("Class_", s)]], apollo_inputs, functionality)
  }

  ### Compute latent class model probabilities
  lc_settings = list(inClassProb = P, classProb = pi_values)
  P[["model"]] = apollo_lc(lc_settings, apollo_inputs, functionality)

  ### Prepare and return outputs of function
  P = apollo_prepareProb(P, apollo_inputs, functionality)
  return(P)
}
# ################################################################# #
#### MODEL ESTIMATION ####
# ################################################################# #
#apollo_beta=apollo_searchStart(apollo_beta, apollo_fixed,apollo_probabilities, apollo_inputs)
#apollo_outOfSample(apollo_beta, apollo_fixed,apollo_probabilities, apollo_inputs)
### Estimate model
model = apollo_estimate(apollo_beta, apollo_fixed,
                        apollo_probabilities, apollo_inputs,
                        estimate_settings = list(writeIter=FALSE))
### Show output in screen
apollo_modelOutput(model)
Code:
Estimation method : bfgs
Model diagnosis : successful convergence
Number of individuals : 1165
Number of rows in database : 4293
Number of modelled outcomes : 4293
Number of cores used : 3
Model without mixing
LL(start) : -2975.681
LL(0, whole model) : -2975.681
LL(final, whole model) : -2023.609
Rho-square (0) : Not applicable
Adj.Rho-square (0) : Not applicable
AIC : 4101.22
BIC : 4273.07
LL(0,Class_1) : -2975.681
LL(final,Class_1) : -9340.931
LL(0,Class_2) : -2975.681
LL(final,Class_2) : -Inf
LL(0,Class_3) : -2975.681
LL(final,Class_3) : -3974.778
Estimated parameters : 27
Time taken (hh:mm:ss) : 00:01:19.12
pre-estimation : 00:00:4.86
estimation : 00:00:49.37
post-estimation : 00:00:24.88
Iterations : 111
Min abs eigenvalue of Hessian : 1.896665
Estimates:
Estimate s.e. t.rat.(0) Rob.s.e. Rob.t.rat.(0)
asc_1 1.954749 0.101166 19.3222 0.109861 17.7930
asc_2 0.000000 NA NA NA NA
beta_tt_a -0.522450 0.054945 -9.5086 0.068650 -7.6104
beta_tt_b 25.346517 0.015637 1620.8969 0.015222 1665.1768
beta_tc_a -0.157968 0.015308 -10.3191 0.018290 -8.6369
beta_tc_b -4.607769 0.002865 -1608.0735 0.002782 -1656.0909
beta_tt_c -0.044401 0.008022 -5.5350 0.006470 -6.8623
beta_tc_c -0.005061 0.002141 -2.3645 0.002090 -2.4219
delta_a 0.613590 0.374883 1.6368 0.432955 1.4172
gamma_elderly_a -1.077918 0.236222 -4.5632 0.240166 -4.4882
gamma_jobless_a -0.379620 0.282142 -1.3455 0.296725 -1.2794
gamma_elderly_c 0.000000 NA NA NA NA
gamma_jobless_c 0.000000 NA NA NA NA
gamma_caruser_c 0.000000 NA NA NA NA
delta_b -0.842500 0.318642 -2.6440 0.315549 -2.6699
delta_c 0.000000 NA NA NA NA
gamma_elderly_b -0.208264 0.200236 -1.0401 0.199936 -1.0417
gamma_jobless_b 0.571880 0.229897 2.4875 0.222695 2.5680
gamma_caruser_a -0.326280 0.301675 -1.0816 0.304720 -1.0708
gamma_caruser_b -0.101781 0.230203 -0.4421 0.231686 -0.4393
gamma_commuter_a 0.253200 0.476865 0.5310 0.518838 0.4880
gamma_commuter_b 0.582709 0.414724 1.4051 0.402611 1.4473
gamma_commuter_c 0.000000 NA NA NA NA
gamma_iwami_commute_a -0.145006 0.227916 -0.6362 0.226546 -0.6401
gamma_iwami_commute_b 0.044435 0.216718 0.2050 0.218427 0.2034
gamma_iwami_commute_c 0.000000 NA NA NA NA
gamma_public_commute_a -0.483673 0.452393 -1.0691 0.502178 -0.9631
gamma_public_commute_b -0.465760 0.385546 -1.2081 0.378026 -1.2321
gamma_public_commute_c 0.000000 NA NA NA NA
gamma_shop_many_a -0.141261 0.209583 -0.6740 0.208199 -0.6785
gamma_shop_many_b -0.071090 0.180231 -0.3944 0.183686 -0.3870
gamma_shop_many_c 0.000000 NA NA NA NA
gamma_shop_tottori_a -0.364021 0.201829 -1.8036 0.206121 -1.7661
gamma_shop_tottori_b 0.104647 0.172031 0.6083 0.174784 0.5987
gamma_shop_tottori_c 0.000000 NA NA NA NA
gamma_hospital_a -0.028119 0.202042 -0.1392 0.208021 -0.1352
gamma_hospital_b 0.050089 0.171492 0.2921 0.170658 0.2935
gamma_hospital_c 0.000000 NA NA NA NA
Summary of class allocation for LC model component :
Mean prob.
Class_1 0.2577
Class_2 0.2549
Class_3 0.4874
As you can see in the result section, the values are really strange (for example, beta_tt_b is 25.3 with a t-ratio above 1600, and LL(final,Class_2) is -Inf).
To wrap up, my questions are:
1) What is causing this situation, and why does it only appear when I increase the number of classes? I cannot work out what is wrong, even after reading the manual many times. FYI, the membership variable inputs are all dummies.
2) What causes the "NaN" standard errors?
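In case it helps with diagnosis, one thing I wondered about is whether some of my membership dummies are too sparse, since a dummy with very few 1s (or very few 0s) could leave the corresponding class-membership gamma weakly identified. Below is a minimal self-contained sketch of the check I have in mind; the data frame and the probabilities are made up for illustration, not my actual dataset:

```r
# Hypothetical illustration: count how many 1s each membership dummy has.
# A dummy that is nearly all 0s (or all 1s) gives little information for
# estimating its class-membership gamma, which may show up as NaN s.e.
set.seed(1)
database <- data.frame(            # made-up data, column names as in my model
  elderly = rbinom(100, 1, 0.30),
  job2    = rbinom(100, 1, 0.05),  # deliberately rare dummy
  caruser = rbinom(100, 1, 0.50)
)
dummy_cols <- c("elderly", "job2", "caruser")
counts <- sapply(database[dummy_cols], function(x) sum(x == 1))
print(counts)

# Flag dummies where either category has fewer than 10 observations
sparse <- names(counts)[counts < 10 | counts > nrow(database) - 10]
print(sparse)
```

The threshold of 10 is arbitrary; on my real data I would run this on every membership variable before adding a third class.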
Thank you for reading this. I look forward to your response.
With best regards,
Hyewon