When I increase the number of classes in the latent class model, the results are strange.

Hyewon
Posts: 11
Joined: 28 Dec 2020, 04:30

When I increase the number of classes in the latent class model, the results are strange.

Post by Hyewon »

Dear Stephane,

Firstly, thank you so much for your contribution to this tool.

I am estimating a latent class model on binary choice data, and I am trying to increase the number of classes beyond two.
The class-specific parameters in the choice model are total travel time and cost, and the class membership variables are socio-demographics.

The model with two classes works fine, but when I move to three classes I run into several problems:
1) Sometimes the standard errors are shown as "NaN".
2) After I revise the membership variables, some of the beta estimates take on really strange values.

Could you please look through the code and results below and comment on what might be wrong?
I cannot move forward because even the three-class model does not work.

Code:

### Vector with names (in quotes) of parameters to be kept fixed at their starting value in apollo_beta, use apollo_beta_fixed = c() if none
apollo_fixed = c("asc_2",  "delta_c",  "gamma_jobless_c",
                 "gamma_caruser_c",  "gamma_commuter_c", "gamma_public_commute_c",
                 "gamma_shop_tottori_c",  "gamma_hospital_c",  "gamma_shop_many_c",
                 "gamma_iwami_commute_c", "gamma_elderly_c")
#apollo_fixed = c()
# ################################################################# #
#### DEFINE LATENT CLASS COMPONENTS                              ####
# ################################################################# #

apollo_lcPars=function(apollo_beta, apollo_inputs){
  lcpars = list()
  lcpars[["beta_tt"]] = list(beta_tt_a, beta_tt_b, beta_tt_c)
  lcpars[["beta_tc"]] = list(beta_tc_a, beta_tc_b, beta_tc_c)
  
  V=list()
  V[["class_a"]] = delta_a + gamma_elderly_a*elderly + gamma_jobless_a*job2 + gamma_caruser_a * caruser +
    gamma_commuter_a*commuter + gamma_iwami_commute_a*iwami_commute + gamma_public_commute_a*public_commute +
      gamma_shop_many_a * shop_many + gamma_shop_tottori_a*shop_tottori +
    gamma_hospital_a * hospital
  
  V[["class_b"]] = delta_b + gamma_elderly_b*elderly + gamma_jobless_b*job2 + gamma_caruser_b * caruser +
    gamma_commuter_b*commuter + gamma_iwami_commute_b*iwami_commute + gamma_public_commute_b*public_commute +
      gamma_shop_many_b * shop_many + gamma_shop_tottori_b*shop_tottori +
     gamma_hospital_b * hospital
  
  V[["class_c"]] = delta_c + gamma_elderly_c*elderly + gamma_jobless_c*job2 + gamma_caruser_c * caruser +
    gamma_commuter_c*commuter + gamma_iwami_commute_c*iwami_commute + gamma_public_commute_c*public_commute +
      gamma_shop_many_c * shop_many + gamma_shop_tottori_c*shop_tottori +
     gamma_hospital_c * hospital
  
  
  mnl_settings = list(
    alternatives = c(class_a=1, class_b=2, class_c = 3), 
    avail        = 1,   # all of the alternatives are available for every choice observation in the data
    choiceVar    = NA, 
    V            = V
  )
  lcpars[["pi_values"]] = apollo_mnl(mnl_settings, functionality="raw")
  
  lcpars[["pi_values"]] = apollo_firstRow(lcpars[["pi_values"]], apollo_inputs)
  
  return(lcpars)
}

# ################################################################# #
#### GROUP AND VALIDATE INPUTS                                   ####
# ################################################################# #

apollo_inputs = apollo_validateInputs()

# ################################################################# #
#### DEFINE MODEL AND LIKELIHOOD FUNCTION                        ####
# ################################################################# #

apollo_probabilities=function(apollo_beta, apollo_inputs, functionality="estimate"){
  
  ### Attach inputs and detach after function exit
  apollo_attach(apollo_beta, apollo_inputs)
  on.exit(apollo_detach(apollo_beta, apollo_inputs))
  
  ### Create list of probabilities P
  P = list()
  
  ### Define settings for MNL model component that are generic across classes
  mnl_settings = list(
    alternatives = c(alt1=2, alt2=1),
    avail        = list(alt1=1, alt2=1),
    choiceVar    = choice
  )
  
  ### Loop over classes
  s=1
  while(s<=3){
    
    ### Compute class-specific utilities
    V=list()
    V[['alt1']]  = asc_1 + beta_tt[[s]]*TT1  + beta_tc[[s]]*TC1
    V[['alt2']]  = asc_2 + beta_tt[[s]]*TT2  + beta_tc[[s]]*TC2

    mnl_settings$V = V
    mnl_settings$componentName = paste0("Class_",s)
    
    ### Compute within-class choice probabilities using MNL model
    P[[paste0("Class_",s)]] = apollo_mnl(mnl_settings, functionality)
    
    ### Take product across observation for same individual
    P[[paste0("Class_",s)]] = apollo_panelProd(P[[paste0("Class_",s)]], apollo_inputs ,functionality)
    
    s = s + 1
  }
  
  ### Compute latent class model probabilities
  lc_settings   = list(inClassProb = P, classProb=pi_values)
  P[["model"]] = apollo_lc(lc_settings, apollo_inputs, functionality)
  
  ### Prepare and return outputs of function
  P = apollo_prepareProb(P, apollo_inputs, functionality)
  return(P)
}


# ################################################################# #
#### MODEL ESTIMATION                                            ####
# ################################################################# #

#apollo_beta=apollo_searchStart(apollo_beta, apollo_fixed,apollo_probabilities, apollo_inputs)
#apollo_outOfSample(apollo_beta, apollo_fixed,apollo_probabilities, apollo_inputs)

### Estimate model
model = apollo_estimate(apollo_beta, apollo_fixed, 
                        apollo_probabilities, apollo_inputs,
                        estimate_settings=list(writeIter=FALSE))

### Show output in screen
apollo_modelOutput(model)
The results are as follows.

Code:

Estimation method                : bfgs
Model diagnosis                  : successful convergence 
Number of individuals            : 1165
Number of rows in database       : 4293
Number of modelled outcomes      : 4293

Number of cores used             :  3 
Model without mixing

LL(start)                        : -2975.681
LL(0, whole model)               : -2975.681
LL(final, whole model)           : -2023.609
Rho-square (0)                   : Not applicable
Adj.Rho-square (0)               : Not applicable
AIC                              :  4101.22 
BIC                              :  4273.07 

LL(0,Class_1)                    : -2975.681
LL(final,Class_1)                : -9340.931
LL(0,Class_2)                    : -2975.681
LL(final,Class_2)                : -Inf
LL(0,Class_3)                    : -2975.681
LL(final,Class_3)                : -3974.778

Estimated parameters             :  27
Time taken (hh:mm:ss)            :  00:01:19.12 
     pre-estimation              :  00:00:4.86 
     estimation                  :  00:00:49.37 
     post-estimation             :  00:00:24.88 
Iterations                       :  111  
Min abs eigenvalue of Hessian    :  1.896665 

Estimates:
                          Estimate        s.e.   t.rat.(0)    Rob.s.e. Rob.t.rat.(0)
asc_1                     1.954749    0.101166     19.3222    0.109861       17.7930
asc_2                     0.000000          NA          NA          NA            NA
beta_tt_a                -0.522450    0.054945     -9.5086    0.068650       -7.6104
beta_tt_b                25.346517    0.015637   1620.8969    0.015222     1665.1768
beta_tc_a                -0.157968    0.015308    -10.3191    0.018290       -8.6369
beta_tc_b                -4.607769    0.002865  -1608.0735    0.002782    -1656.0909
beta_tt_c                -0.044401    0.008022     -5.5350    0.006470       -6.8623
beta_tc_c                -0.005061    0.002141     -2.3645    0.002090       -2.4219
delta_a                   0.613590    0.374883      1.6368    0.432955        1.4172
gamma_elderly_a          -1.077918    0.236222     -4.5632    0.240166       -4.4882
gamma_jobless_a          -0.379620    0.282142     -1.3455    0.296725       -1.2794
gamma_elderly_c           0.000000          NA          NA          NA            NA
gamma_jobless_c           0.000000          NA          NA          NA            NA
gamma_caruser_c           0.000000          NA          NA          NA            NA
delta_b                  -0.842500    0.318642     -2.6440    0.315549       -2.6699
delta_c                   0.000000          NA          NA          NA            NA
gamma_elderly_b          -0.208264    0.200236     -1.0401    0.199936       -1.0417
gamma_jobless_b           0.571880    0.229897      2.4875    0.222695        2.5680
gamma_caruser_a          -0.326280    0.301675     -1.0816    0.304720       -1.0708
gamma_caruser_b          -0.101781    0.230203     -0.4421    0.231686       -0.4393
gamma_commuter_a          0.253200    0.476865      0.5310    0.518838        0.4880
gamma_commuter_b          0.582709    0.414724      1.4051    0.402611        1.4473
gamma_commuter_c          0.000000          NA          NA          NA            NA
gamma_iwami_commute_a    -0.145006    0.227916     -0.6362    0.226546       -0.6401
gamma_iwami_commute_b     0.044435    0.216718      0.2050    0.218427        0.2034
gamma_iwami_commute_c     0.000000          NA          NA          NA            NA
gamma_public_commute_a   -0.483673    0.452393     -1.0691    0.502178       -0.9631
gamma_public_commute_b   -0.465760    0.385546     -1.2081    0.378026       -1.2321
gamma_public_commute_c    0.000000          NA          NA          NA            NA
gamma_shop_many_a        -0.141261    0.209583     -0.6740    0.208199       -0.6785
gamma_shop_many_b        -0.071090    0.180231     -0.3944    0.183686       -0.3870
gamma_shop_many_c         0.000000          NA          NA          NA            NA
gamma_shop_tottori_a     -0.364021    0.201829     -1.8036    0.206121       -1.7661
gamma_shop_tottori_b      0.104647    0.172031      0.6083    0.174784        0.5987
gamma_shop_tottori_c      0.000000          NA          NA          NA            NA
gamma_hospital_a         -0.028119    0.202042     -0.1392    0.208021       -0.1352
gamma_hospital_b          0.050089    0.171492      0.2921    0.170658        0.2935
gamma_hospital_c          0.000000          NA          NA          NA            NA


Summary of class allocation for LC model component :
         Mean prob.
Class_1      0.2577
Class_2      0.2549
Class_3      0.4874


As you can see in the results above, the estimates are really strange.
To sum up my questions:

1) What is causing this problem, and why does it only appear when I increase the number of classes?
I cannot see what is wrong, even though I have read the manual many times.
For reference, the membership variables are all dummy variables.

2) What is causing the "NaN" values in the standard errors?

Thank you for reading this. I look forward to your response.

With best regards,
Hyewon
stephanehess
Site Admin
Posts: 974
Joined: 24 Apr 2020, 16:29

Re: When I increase the number of classes in the latent class model, the results are strange.

Post by stephanehess »

Hi

Latent class models often run into problems when you go beyond two classes, but there are solutions.

It is quite likely that your model has ended up in a poor local optimum. In addition, the parameter values in your second class (a large positive time coefficient) would imply that the slower option is always chosen.

This could be a starting values issue, but I would also suggest you try the EM algorithm instead of BFGS - have a look at the example online for that.
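
In your script, this would roughly amount to replacing the call to apollo_estimate with apollo_lcEM, along these lines (just a minimal sketch with the same four main arguments; see the manual and online example 28 for the optional settings):

Code:

model = apollo_lcEM(apollo_beta, apollo_fixed,
                    apollo_probabilities, apollo_inputs)
apollo_modelOutput(model)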

Stephane
--------------------------------
Stephane Hess
www.stephanehess.me.uk
Hyewon
Posts: 11
Joined: 28 Dec 2020, 04:30

Re: When I increase the number of classes in the latent class model, the results are strange.

Post by Hyewon »

Dear Stephane,

Thank you so much for your prompt and clear answer, as always.
I tried the EM algorithm for the latent class model, following example 28 and the manual released this year.
However, the results got worse.

I attach the code and the results below.
It is frustrating that I still cannot get the three-class model to work.
I would appreciate it if you could let me know what the problem is.

Code:

# ################################################################# #
#### DEFINE MODEL PARAMETERS                                     ####
# ################################################################# #

### Vector of parameters, including any that are kept fixed in estimation
apollo_beta = c(asc_1_a         = 0,
                asc_1_b         = 0,
                asc_1_c = 0,
                asc_2           = 0,
                beta_tt_a = 0,
                beta_tt_b = 0,
                beta_tt_c = 0,
                beta_tc_a       = 0,
                beta_tc_b       = 0,
                beta_tc_c = 0,
                
                delta_a         = 0,
                delta_b         = 0,
                delta_c = 0,
                
                gamma_elderly_a = 0,
                gamma_elderly_b = 0,
                gamma_elderly_c = 0,
                gamma_jobless_a  = 0,
                gamma_jobless_b  = 0,
                gamma_jobless_c  = 0,
                gamma_caruser_a = 0,
                gamma_caruser_b = 0,
                gamma_caruser_c = 0,
               
                gamma_commuter_a = 0,
                gamma_commuter_b = 0,
                gamma_commuter_c = 0,
                
                gamma_iwami_commute_a = 0,
                gamma_iwami_commute_b = 0,
                gamma_iwami_commute_c = 0,
                
                gamma_public_commute_a = 0,
                gamma_public_commute_b = 0,
                gamma_public_commute_c = 0,
                
                gamma_shop_many_a = 0,
                gamma_shop_many_b = 0,
                gamma_shop_many_c = 0,
                
                gamma_shop_tottori_a = 0,
                gamma_shop_tottori_b = 0,
                gamma_shop_tottori_c = 0,
                
                gamma_hospital_a = 0,
                gamma_hospital_b = 0,
                gamma_hospital_c = 0)

### Vector with names (in quotes) of parameters to be kept fixed at their starting value in apollo_beta, use apollo_beta_fixed = c() if none
apollo_fixed = c("asc_2",  "delta_c",  "gamma_jobless_c",
                 "gamma_caruser_c",  "gamma_commuter_c", "gamma_public_commute_c",
                 "gamma_shop_tottori_c",  "gamma_hospital_c",  "gamma_shop_many_c",
                 "gamma_iwami_commute_c", "gamma_elderly_c")

# ################################################################# #
#### DEFINE LATENT CLASS COMPONENTS                              ####
# ################################################################# #

apollo_lcPars=function(apollo_beta, apollo_inputs){
  lcpars = list()
  lcpars[["asc_1"]] = list(asc_1_a, asc_1_b, asc_1_c)
  lcpars[["beta_tt"]] = list(beta_tt_a, beta_tt_b, beta_tt_c)
  
  lcpars[["beta_tc"]] = list(beta_tc_a, beta_tc_b, beta_tc_c)
  
  V=list()
  V[["class_a"]] = delta_a + gamma_elderly_a*elderly + gamma_jobless_a*job2 + gamma_caruser_a * caruser +
    gamma_commuter_a*commuter + gamma_iwami_commute_a*iwami_commute + gamma_public_commute_a*public_commute +
    gamma_shop_many_a * shop_many + gamma_shop_tottori_a*shop_tottori + gamma_hospital_a * hospital
  
  V[["class_b"]] = delta_b + gamma_elderly_b*elderly + gamma_jobless_b*job2 + gamma_caruser_b * caruser +
    gamma_commuter_b*commuter + gamma_iwami_commute_b*iwami_commute + gamma_public_commute_b*public_commute +
    gamma_shop_many_b * shop_many + gamma_shop_tottori_b*shop_tottori +gamma_hospital_b * hospital
  
  V[["class_c"]] = delta_c + gamma_elderly_c*elderly + gamma_jobless_c*job2 + gamma_caruser_c * caruser +
    gamma_commuter_c*commuter + gamma_iwami_commute_c*iwami_commute + gamma_public_commute_c*public_commute +
    gamma_shop_many_c * shop_many + gamma_shop_tottori_c*shop_tottori + gamma_hospital_c * hospital
  
  mnl_settings = list(
    alternatives = c(class_a=1, class_b=2, class_c = 3), 
    avail        = 1, 
    choiceVar    = NA, 
    V            = V
  )
  lcpars[["pi_values"]] = apollo_mnl(mnl_settings, functionality="raw")
  
  lcpars[["pi_values"]] = apollo_firstRow(lcpars[["pi_values"]], apollo_inputs)
  
  return(lcpars)
}

# ################################################################# #
#### GROUP AND VALIDATE INPUTS                                   ####
# ################################################################# #

apollo_inputs = apollo_validateInputs()

# ################################################################# #
#### DEFINE MODEL AND LIKELIHOOD FUNCTION                        ####
# ################################################################# #

apollo_probabilities=function(apollo_beta, apollo_inputs, functionality="estimate"){
  
  ### Attach inputs and detach after function exit
  apollo_attach(apollo_beta, apollo_inputs)
  on.exit(apollo_detach(apollo_beta, apollo_inputs))
  
  ### Create list of probabilities P
  P = list()
  
  ### Define settings for MNL model component that are generic across classes
  mnl_settings = list(
    alternatives = c(alt1=2, alt2=1),
    avail        = list(alt1=1, alt2=1),
    choiceVar    = choice
  )
  
  ### Loop over classes
  for(s in 1:length(pi_values)){
    
    ### Compute class-specific utilities
    V=list()
    V[['alt1']]  = asc_1[[s]] + beta_tt[[s]]*TT1  + beta_tc[[s]]*TC1
    V[['alt2']]  = asc_2      + beta_tt[[s]]*TT2  + beta_tc[[s]]*TC2
    
    mnl_settings$V = V
    mnl_settings$componentName = paste0("Class_",s)
    
    ### Compute within-class choice probabilities using MNL model
    P[[paste0("Class_",s)]] = apollo_mnl(mnl_settings, functionality)
    
    ### Take product across observation for same individual
    P[[paste0("Class_",s)]] = apollo_panelProd(P[[paste0("Class_",s)]], apollo_inputs ,functionality)
  }
  
  ### Compute latent class model probabilities
  lc_settings   = list(inClassProb = P, classProb=pi_values)
  P[["model"]] = apollo_lc(lc_settings, apollo_inputs, functionality)
  
  ### Prepare and return outputs of function
  P = apollo_prepareProb(P, apollo_inputs, functionality)
  return(P)
}

# ################################################################# #
#### EM ESTIMATION                                               ####
# ################################################################# #

model=apollo_lcEM(apollo_beta, apollo_fixed, apollo_probabilities, 
                  apollo_inputs)

# ################################################################# #
#### MODEL OUTPUTS                                               ####
# ################################################################# #

# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO SCREEN)                               ----
# ----------------------------------------------------------------- #

apollo_modelOutput(model)

The output of this code is as follows.

Code:

Model run using Apollo for R, version 0.2.2 on Darwin by hyewon 
www.ApolloChoiceModelling.com

Model name                       : Apollo_example_28
Model description                : LC model with class allocation model on Swiss route choice data, EM algorithm
Model run at                     : 2021-10-26 20:08:17
Estimation method                : EM algorithm (bfgs) -> Maximum likelihood (bfgs)
Model diagnosis                  : successful convergence 
Number of individuals            : 1165
Number of rows in database       : 4293
Number of modelled outcomes      : 4293

Number of cores used             :  1 
Model without mixing

LL(start)                        : -2975.681
LL(0, whole model)               : -2975.681
LL(final, whole model)           : -2614.235
Rho-square (0)                   : Not applicable
Adj.Rho-square (0)               : Not applicable
AIC                              :  5286.47 
BIC                              :  5471.05 

LL(0,Class_1)                    : -2975.681
LL(final,Class_1)                : -2614.235
LL(0,Class_2)                    : -2975.681
LL(final,Class_2)                : -2614.235
LL(0,Class_3)                    : -2975.681
LL(final,Class_3)                : -2614.235

Estimated parameters             :  29
Time taken (hh:mm:ss)            :  00:00:36.01 
     pre-estimation              :  00:00:1.2 
     estimation                  :  00:00:6.13 
     post-estimation             :  00:00:28.69 
Iterations                       :  3 (EM) & 4 (bfgs)  
Min abs eigenvalue of Hessian    :  76877.86 
Some eigenvalues of Hessian are positive, indicating potential problems!

Estimates:
                          Estimate        s.e.   t.rat.(0)    Rob.s.e. Rob.t.rat.(0)
asc_1_a                   0.045319    0.039056       1.160    0.061475        0.7372
asc_1_b                   0.045319    0.039056       1.160    0.061475        0.7372
asc_1_c                   0.045319         NaN         NaN    0.061495        0.7369
asc_2                     0.000000          NA          NA          NA            NA
beta_tt_a                -0.059964    0.003069     -19.542    0.002991      -20.0495
beta_tt_b                -0.059964    0.003069     -19.542    0.002991      -20.0495
beta_tt_c                -0.059964    0.002726     -21.997    0.002991      -20.0476
beta_tc_a                -0.007289  4.3528e-04     -16.746  4.6164e-04      -15.7895
beta_tc_b                -0.007289  4.3528e-04     -16.746  4.6164e-04      -15.7895
beta_tc_c                -0.007289         NaN         NaN  4.6173e-04      -15.7868
delta_a                   0.000000         NaN         NaN  4.4879e-04        0.0000
delta_b                   0.000000         NaN         NaN  4.4879e-04        0.0000
delta_c                   0.000000          NA          NA          NA            NA
gamma_elderly_a           0.000000         NaN         NaN    0.001668        0.0000
gamma_elderly_b           0.000000         NaN         NaN    0.001668        0.0000
gamma_elderly_c           0.000000          NA          NA          NA            NA
gamma_jobless_a           0.000000         NaN         NaN    0.002250        0.0000
gamma_jobless_b           0.000000         NaN         NaN    0.002250        0.0000
gamma_jobless_c           0.000000          NA          NA          NA            NA
gamma_caruser_a           0.000000         NaN         NaN    0.001680        0.0000
gamma_caruser_b           0.000000         NaN         NaN    0.001680        0.0000
gamma_caruser_c           0.000000          NA          NA          NA            NA
gamma_commuter_a          0.000000         NaN         NaN  2.9566e-04        0.0000
gamma_commuter_b          0.000000         NaN         NaN  2.9566e-04        0.0000
gamma_commuter_c          0.000000          NA          NA          NA            NA
gamma_iwami_commute_a     0.000000         NaN         NaN  8.3254e-04        0.0000
gamma_iwami_commute_b     0.000000         NaN         NaN  8.3254e-04        0.0000
gamma_iwami_commute_c     0.000000          NA          NA          NA            NA
gamma_public_commute_a    0.000000         NaN         NaN    0.001858        0.0000
gamma_public_commute_b    0.000000         NaN         NaN    0.001858        0.0000
gamma_public_commute_c    0.000000          NA          NA          NA            NA
gamma_shop_many_a         0.000000         NaN         NaN    0.006217        0.0000
gamma_shop_many_b         0.000000         NaN         NaN    0.006217        0.0000
gamma_shop_many_c         0.000000          NA          NA          NA            NA
gamma_shop_tottori_a      0.000000         NaN         NaN  9.0574e-04        0.0000
gamma_shop_tottori_b      0.000000         NaN         NaN  9.0574e-04        0.0000
gamma_shop_tottori_c      0.000000          NA          NA          NA            NA
gamma_hospital_a          0.000000         NaN         NaN    0.001806        0.0000
gamma_hospital_b          0.000000         NaN         NaN    0.001806        0.0000
gamma_hospital_c          0.000000          NA          NA          NA            NA


Summary of class allocation for LC model component :
         Mean prob.
Class_1      0.3333
Class_2      0.3333
Class_3      0.3333
Thank you for your help, as always.
stephanehess
Site Admin
Posts: 974
Joined: 24 Apr 2020, 16:29

Re: When I increase the number of classes in the latent class model, the results are strange.

Post by stephanehess »

Hi

You should use different starting values in the different classes; otherwise, they collapse onto the same solution, as they do in your EM example.
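
For example, something along these lines (illustrative values only, not taken from your model; in practice you could base them on estimates from a one- or two-class version):

Code:

### give each class distinct, non-zero starting values so the classes
### cannot collapse onto the same solution (the numbers below are just placeholders)
apollo_beta["beta_tt_a"] = -0.10
apollo_beta["beta_tt_b"] = -0.05
apollo_beta["beta_tt_c"] = -0.01
apollo_beta["beta_tc_a"] = -0.020
apollo_beta["beta_tc_b"] = -0.010
apollo_beta["beta_tc_c"] = -0.005

Another common approach is to use the estimates from your two-class model, with a small perturbation, as starting values for the three-class model.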

Stephane
--------------------------------
Stephane Hess
www.stephanehess.me.uk