To estimate the overspecified model, I first estimated a CL model (all priors = 0) with all levels of all attributes, including the ASC and the SQ constant. I then increased the complexity and estimated an MXL model with all levels of all attributes and SD parameters (mu_log_b_cost = -5, sigma_log_b_cost = -2, remaining priors = 0).
As you explained, I then estimated two models with adjusted reference categories. The reference category for each attribute was the level with the lowest SD estimate. SQ had a lower SD value than the ASC. Nevertheless, I estimated two models, once with the ASC and once with the SQ constant, because I was not sure whether I had defined the utility function correctly.
In both cases, the significance levels differ between the SQ and ASC models and, within each model, between utility and WTP space. However, the differences are less pronounced than in the previous model.
Second, the ASC estimate is still negative in utility space, which aligns with our assumption, but when transformed into WTP space it turns positive. The same holds, with reversed signs, when SQ is the base level.
SQ as base level, and the level with the lowest SD as the reference level for each attribute:
Below is the code for the utility-space models of the second approach and for the WTP-space model with the ASC/SQ as base level.
Code:
# ########################################################################## #
#### Overspecified CL model in utility space ####
# ########################################################################## #
### Set core controls
apollo_control = list(
modelName ="CLmodel_Paper4_utility_palma",
modelDescr ="CL model in Utility Space paper 4 - palma method - overspecified",
indivID ="id",
nCores = 6
)
# ########################################################################## #
#### LOAD DATA AND APPLY ANY TRANSFORMATIONS ####
# ########################################################################## #
library(haven)
database <- read.csv("Data_UKNLFR_mf_.csv", header = TRUE)
# ########################################################################## #
#### Creating categorical variables 'government', 'transfer', 'nocluster' ####
# ########################################################################## #
# 'government' dummy
database$government_a <- ifelse(database$retailer_a == 0 & database$ngo_a == 0, 1, 0)
database$government_b <- ifelse(database$retailer_b == 0 & database$ngo_b == 0, 1, 0)
# 'transfer' dummy
database$transfer_a <- ifelse(database$label_a == 0, 1, 0)
database$transfer_b <- ifelse(database$label_b == 0, 1, 0)
# 'nocluster' dummy
database$nocluster_a <- ifelse(database$optional_a == 0 & database$required_a == 0, 1, 0)
database$nocluster_b <- ifelse(database$optional_b == 0 & database$required_b == 0, 1, 0)
# ################################################################# #
#### DEFINE MODEL PARAMETERS ####
# ################################################################# #
### Vector of parameters, including any that are kept fixed in estimation
apollo_beta=c(b_sq = 0,
b_asc = 0,
b_government = 0,
b_retailer = 0,
b_ngo = 0,
b_transfer = 0,
b_label = 0,
b_nocluster = 0,
b_optional = 0,
b_required = 0,
b_cost = 0)
# ################################################################# #
#### GROUP AND VALIDATE INPUTS ####
# ################################################################# #
### Vector with names (in quotes) of parameters to be kept fixed at their starting value in apollo_beta, use apollo_beta_fixed = c() if none
apollo_fixed = c()
apollo_inputs = apollo_validateInputs()
# ################################################################# #
#### DEFINE MODEL AND LIKELIHOOD FUNCTION ####
# ################################################################# #
apollo_probabilities=function(apollo_beta, apollo_inputs, functionality="estimate"){
### Attach inputs and detach after function exit
apollo_attach(apollo_beta, apollo_inputs)
on.exit(apollo_detach(apollo_beta, apollo_inputs))
### Create list of probabilities P
P = list()
### List of utilities: these must use the same names as in mnl_settings, order is irrelevant
V = list()
V[['A']] = b_asc + b_government * government_a + b_retailer * retailer_a + b_ngo * ngo_a + b_transfer * transfer_a + b_label * label_a + b_nocluster * nocluster_a + b_optional * optional_a + b_required * required_a + b_cost * cost_a
V[['B']] = b_asc + b_government * government_b + b_retailer * retailer_b + b_ngo * ngo_b + b_transfer * transfer_b + b_label * label_b + b_nocluster * nocluster_b + b_optional * optional_b + b_required * required_b + b_cost * cost_b
V[['C']] = b_sq
### Define settings for MNL model component
mnl_settings = list(
alternatives = c(A=1, B=2, C=3),
avail = 1,
choiceVar = decision,
V = V
)
### Compute probabilities using MNL model
P[['model']] = apollo_mnl(mnl_settings, functionality)
### Take product across observation for same individual
P = apollo_panelProd(P, apollo_inputs, functionality)
### Prepare and return outputs of function
P = apollo_prepareProb(P, apollo_inputs, functionality)
return(P)
}
# ################################################################# #
#### MODEL ESTIMATION ####
# ################################################################# #
model = apollo_estimate(apollo_beta, apollo_fixed, apollo_probabilities, apollo_inputs)
# ################################################################# #
#### MODEL OUTPUTS ####
# ################################################################# #
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO SCREEN) ----
# ----------------------------------------------------------------- #
apollo_modelOutput(model, list(printPVal = 2))
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO FILE, using model name) ----
# ----------------------------------------------------------------- #
apollo_saveOutput(model, list(printPVal = 2))
# ########################################################################## #
#### Overspecified MXL model in utility space ####
# ########################################################################## #
# ################################################################# #
#### Setting up the work environment ####
# ################################################################# #
### Clear memory
rm(list = ls())
# check the current working directory
getwd()
# change the working directory
setwd("U:/Meine Bibliothek/Paper_/Paper 4 Choice Experiment - CL MNL/5_Data/5_MainData_Collection/3_R/WTP Space/MXL in WTP - Palma Method")
# Load R packages from library
library(apollo)
library(tidyr)
library(readxl)
### Initialise code
apollo_initialise()
# ########################################################################## #
#### Overspecified MXL model in utility space ####
# ########################################################################## #
### Set core controls
apollo_control = list(
modelName ="MXLmodel_Paper4_utility_palma_overspecified",
modelDescr ="MXL model in Utility Space paper 4 - palma method - overspecified",
indivID ="id",
nCores = 6,
mixing = TRUE
)
# ########################################################################## #
#### LOAD DATA AND APPLY ANY TRANSFORMATIONS ####
# ########################################################################## #
library(haven)
database <- read.csv("Data_UKNLFR_mf_.csv", header = TRUE)
# ########################################################################## #
#### Creating categorical variables 'government', 'transfer', 'nocluster' ####
# ########################################################################## #
# 'government' dummy
database$government_a <- ifelse(database$retailer_a == 0 & database$ngo_a == 0, 1, 0)
database$government_b <- ifelse(database$retailer_b == 0 & database$ngo_b == 0, 1, 0)
# 'transfer' dummy
database$transfer_a <- ifelse(database$label_a == 0, 1, 0)
database$transfer_b <- ifelse(database$label_b == 0, 1, 0)
# 'nocluster' dummy
database$nocluster_a <- ifelse(database$optional_a == 0 & database$required_a == 0, 1, 0)
database$nocluster_b <- ifelse(database$optional_b == 0 & database$required_b == 0, 1, 0)
# ################################################################# #
#### DEFINE MODEL PARAMETERS ####
# ################################################################# #
### Vector of parameters, including any that are kept fixed in estimation
apollo_beta = c(mu_sq = 0,
mu_asc = 0,
mu_b_government = 0,
mu_b_retailer = 0,
mu_b_ngo = 0,
mu_b_transfer = 0,
mu_b_label = 0,
mu_b_nocluster = 0,
mu_b_optional = 0,
mu_b_required = 0,
mu_log_b_cost = -5,
sigma_sq = 0,
sigma_asc = 0,
sigma_b_government = 0,
sigma_b_retailer = 0,
sigma_b_ngo = 0,
sigma_b_transfer = 0,
sigma_b_label = 0,
sigma_b_nocluster = 0,
sigma_b_optional = 0,
sigma_b_required = 0,
sigma_log_b_cost = -2)
### Vector with names (in quotes) of parameters to be kept fixed at their starting value in apollo_beta, use apollo_beta_fixed = c() if none
apollo_fixed = c() # fixing parameters may be of interest later for a latent class (independent availability) model
# ################################################################# #
#### DEFINE RANDOM COMPONENTS ####
# ################################################################# #
### Sobol draws: analysis of different types of draws: Czajkowski & Budzinski 2017
apollo_draws = list(
interDrawsType = "sobol", # Halton draws are not recommended for more than five random coefficients (cf. Bhat, 2003). For computing Bayesian efficiency, pseudo-random (pseudo Monte Carlo) draws are the simplest (ChoiceMetrics, 2018).
interNDraws = 3000, # more than 2000 Sobol draws recommended by Czajkowski & Budzinski 2019
interUnifDraws = c(),
interNormDraws = c("draws_sq","draws_asc", "draws_government", "draws_retailer","draws_ngo", "draws_transfer", "draws_label", "draws_nocluster", "draws_optional","draws_required","draws_cost")
)
### Create random parameters
apollo_randCoeff = function(apollo_beta, apollo_inputs){
randcoeff = list()
randcoeff[["b_sq"]] = ( mu_sq + sigma_sq * draws_sq )
randcoeff[["b_asc"]] = ( mu_asc + sigma_asc * draws_asc )
randcoeff[["b_government"]] = ( mu_b_government + sigma_b_government * draws_government )
randcoeff[["b_retailer"]] = ( mu_b_retailer + sigma_b_retailer * draws_retailer )
randcoeff[["b_ngo"]] = ( mu_b_ngo + sigma_b_ngo * draws_ngo )
randcoeff[["b_transfer"]] = ( mu_b_transfer + sigma_b_transfer * draws_transfer )
randcoeff[["b_label"]] = ( mu_b_label + sigma_b_label * draws_label )
randcoeff[["b_nocluster"]] = ( mu_b_nocluster + sigma_b_nocluster * draws_nocluster )
randcoeff[["b_optional"]] = ( mu_b_optional + sigma_b_optional * draws_optional )
randcoeff[["b_required"]] = ( mu_b_required + sigma_b_required * draws_required)
randcoeff[["b_cost"]] = exp( mu_log_b_cost + sigma_log_b_cost * draws_cost )
return(randcoeff)
}
# ################################################################# #
#### GROUP AND VALIDATE INPUTS ####
# ################################################################# #
apollo_inputs = apollo_validateInputs()
# ################################################################# #
#### DEFINE MODEL AND LIKELIHOOD FUNCTION ####
# ################################################################# #
apollo_probabilities=function(apollo_beta, apollo_inputs, functionality="estimate"){
### Function initialization: do not change the following three commands
### Attach inputs and detach after function exit
apollo_attach(apollo_beta, apollo_inputs)
on.exit(apollo_detach(apollo_beta, apollo_inputs))
### Create list of probabilities P
P = list()
### List of utilities: these must use the same names as in mnl_settings, order is irrelevant
V = list()
V[['A']] = b_asc + b_government * government_a + b_retailer * retailer_a + b_ngo * ngo_a + b_transfer * transfer_a + b_label * label_a + b_nocluster * nocluster_a + b_optional * optional_a + b_required * required_a + b_cost * cost_a
V[['B']] = b_asc + b_government * government_b + b_retailer * retailer_b + b_ngo * ngo_b + b_transfer * transfer_b + b_label * label_b + b_nocluster * nocluster_b + b_optional * optional_b + b_required * required_b + b_cost * cost_b
V[['C']] = b_sq
### Define settings for MNL model component
mnl_settings = list(
alternatives = c(A=1, B=2, C=3),
avail = 1,
choiceVar = decision, # depends on the choice variable name used in Stata
V = V
)
### Compute probabilities using MNL model
P[['model']] = apollo_mnl(mnl_settings, functionality)
### Take product across observation for same individual
P = apollo_panelProd(P, apollo_inputs, functionality)
### Average across inter-individual draws
P = apollo_avgInterDraws(P, apollo_inputs, functionality)
### Prepare and return outputs of function
P = apollo_prepareProb(P, apollo_inputs, functionality)
return(P)
}
# ################################################################# #
#### MODEL ESTIMATION ####
# ################################################################# #
model = apollo_estimate(apollo_beta, apollo_fixed, apollo_probabilities, apollo_inputs)
# ################################################################# #
#### MODEL OUTPUTS ####
# ################################################################# #
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO SCREEN) ----
# ----------------------------------------------------------------- #
apollo_modelOutput(model, list(printPVal = 2))
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO FILE, using model name) ----
# ----------------------------------------------------------------- #
apollo_saveOutput(
model,
list(printPVal = 2)
)
#################################
# Cost Parameter Transformation #
#################################
# Transform the cost parameter using the method suggested by D. Palma; see the forum post:
# http://www.apollochoicemodelling.com/forum/viewtopic.php?f=16&t=262
# mu and sigma are the mean and s.d. of the log of the cost coefficient; the delta method
# below recovers the mean and s.d. of the lognormal coefficient itself.
apollo_deltaMethod(model, deltaMethod_settings = list(operation="lognormal",parName1="mu_log_b_cost",parName2="sigma_log_b_cost"))
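# (Optional sanity check, a sketch using the standard lognormal relationships:
#  if log(b_cost) ~ N(mu, sigma^2), then E[b_cost] = exp(mu + sigma^2/2) and
#  sd(b_cost) = E[b_cost] * sqrt(exp(sigma^2) - 1); estimates are read from model$estimate.)
mu_hat    <- model$estimate["mu_log_b_cost"]
sigma_hat <- model$estimate["sigma_log_b_cost"]
cost_mean <- exp(mu_hat + sigma_hat^2 / 2)
cost_sd   <- cost_mean * sqrt(exp(sigma_hat^2) - 1)
print(c(mean_b_cost = unname(cost_mean), sd_b_cost = unname(cost_sd)))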
############################################################
# Estimating difference between NGO and retailer estimator #
############################################################
# compare https://www.apollochoicemodelling.com/forum/viewtopic.php?t=202.
apollo_deltaMethod(model, deltaMethod_settings = list(operation="diff",parName1="mu_b_retailer",parName2="mu_b_ngo"))
#############################################################
# Estimating difference between FC optional and FC required #
#############################################################
# compare https://www.apollochoicemodelling.com/forum/viewtopic.php?t=202.
apollo_deltaMethod(model, deltaMethod_settings = list(operation="diff",parName1="mu_b_optional",parName2="mu_b_required"))
# ########################################################################## #
#### Adapted Overspecified MXL model in utility space ####
# ########################################################################## #
# ################################################################# #
#### Setting up the work environment ####
# ################################################################# #
### Clear memory
rm(list = ls())
# check the current working directory
getwd()
# change the working directory
setwd("U:/Meine Bibliothek/Paper_/Paper 4 Choice Experiment - CL MNL/5_Data/5_MainData_Collection/3_R/WTP Space/MXL in WTP - Palma Method")
# Load R packages from library
library(apollo)
library(tidyr)
library(readxl)
### Initialise code
apollo_initialise()
# ########################################################################## #
#### Adapted Overspecified MXL model in utility space ####
# ########################################################################## #
### Set core controls
apollo_control = list(
modelName ="MXLmodel_Paper4_utility_palma_overspecified_adapted",
modelDescr ="MXL model in Utility Space paper 4 - palma method - overspecified SQ",
indivID ="id",
nCores = 6,
mixing = TRUE
)
# ########################################################################## #
#### LOAD DATA AND APPLY ANY TRANSFORMATIONS ####
# ########################################################################## #
library(haven)
database <- read.csv("Data_UKNLFR_mf_.csv", header = TRUE)
# ########################################################################## #
#### Creating categorical variables 'government', 'transfer', 'nocluster' ####
# ########################################################################## #
# 'government' dummy
database$government_a <- ifelse(database$retailer_a == 0 & database$ngo_a == 0, 1, 0)
database$government_b <- ifelse(database$retailer_b == 0 & database$ngo_b == 0, 1, 0)
# 'transfer' dummy
database$transfer_a <- ifelse(database$label_a == 0, 1, 0)
database$transfer_b <- ifelse(database$label_b == 0, 1, 0)
# 'nocluster' dummy
database$nocluster_a <- ifelse(database$optional_a == 0 & database$required_a == 0, 1, 0)
database$nocluster_b <- ifelse(database$optional_b == 0 & database$required_b == 0, 1, 0)
# ################################################################# #
#### Choosing Reference levels ####
# ################################################################# #
# The reference levels for the adapted MXL model are chosen based on the sigma parameters:
# for each attribute, the reference level is the level with the lowest SD estimate.
# Reference levels (with their SD estimates):
# ASC = -0.30596
# retailer = 0.23446
# label = -0.14067
# required = 0.01226
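# (A sketch of how the lowest-|sigma| levels can be read off the overspecified MXL estimated
#  above, assuming its output was saved with apollo_saveOutput in this working directory;
#  apollo_loadModel() re-loads a previously saved model object.)
prevModel <- apollo_loadModel("MXLmodel_Paper4_utility_palma_overspecified")
sigmas    <- prevModel$estimate[grep("^sigma", names(prevModel$estimate))]
sort(abs(sigmas)) # pick, per attribute, the level with the smallest |sigma| as reference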
# ################################################################# #
#### DEFINE MODEL PARAMETERS ####
# ################################################################# #
### Vector of parameters, including any that are kept fixed in estimation
apollo_beta = c(mu_sq = 0,
#mu_asc = 0,
mu_b_government = 0,
mu_b_ngo = 0,
mu_b_transfer = 0,
mu_b_nocluster = 0,
mu_b_optional = 0,
mu_log_b_cost = -5,
sigma_sq = 0,
#sigma_asc = 0,
sigma_b_government = 0,
sigma_b_ngo = 0,
sigma_b_transfer = 0,
sigma_b_nocluster = 0,
sigma_b_optional = 0,
sigma_log_b_cost = -2)
### Vector with names (in quotes) of parameters to be kept fixed at their starting value in apollo_beta, use apollo_beta_fixed = c() if none
apollo_fixed = c() # fixing parameters may be of interest later for a latent class (independent availability) model
# ################################################################# #
#### DEFINE RANDOM COMPONENTS ####
# ################################################################# #
### Sobol draws: analysis of different types of draws: Czajkowski & Budzinski 2017
apollo_draws = list(
interDrawsType = "sobol", # Halton draws are not recommended for more than five random coefficients (cf. Bhat, 2003). For computing Bayesian efficiency, pseudo-random (pseudo Monte Carlo) draws are the simplest (ChoiceMetrics, 2018).
interNDraws = 3000, # more than 2000 Sobol draws recommended by Czajkowski & Budzinski 2019
interUnifDraws = c(),
interNormDraws = c("draws_sq","draws_asc", "draws_government", "draws_retailer","draws_ngo", "draws_transfer", "draws_label", "draws_nocluster", "draws_optional","draws_required","draws_cost")
)
### Create random parameters
apollo_randCoeff = function(apollo_beta, apollo_inputs){
randcoeff = list()
randcoeff[["b_sq"]] = ( mu_sq + sigma_sq * draws_sq )
#randcoeff[["b_asc"]] = ( mu_asc + sigma_asc * draws_asc )
randcoeff[["b_government"]] = ( mu_b_government + sigma_b_government * draws_government )
randcoeff[["b_ngo"]] = ( mu_b_ngo + sigma_b_ngo * draws_ngo )
randcoeff[["b_transfer"]] = ( mu_b_transfer + sigma_b_transfer * draws_transfer )
randcoeff[["b_nocluster"]] = ( mu_b_nocluster + sigma_b_nocluster * draws_nocluster )
randcoeff[["b_optional"]] = ( mu_b_optional + sigma_b_optional * draws_optional )
randcoeff[["b_cost"]] = exp( mu_log_b_cost + sigma_log_b_cost * draws_cost )
return(randcoeff)
}
# ################################################################# #
#### GROUP AND VALIDATE INPUTS ####
# ################################################################# #
apollo_inputs = apollo_validateInputs()
# ################################################################# #
#### DEFINE MODEL AND LIKELIHOOD FUNCTION ####
# ################################################################# #
apollo_probabilities=function(apollo_beta, apollo_inputs, functionality="estimate"){
### Function initialization: do not change the following three commands
### Attach inputs and detach after function exit
apollo_attach(apollo_beta, apollo_inputs)
on.exit(apollo_detach(apollo_beta, apollo_inputs))
### Create list of probabilities P
P = list()
### List of utilities: these must use the same names as in mnl_settings, order is irrelevant
V = list()
V[['A']] = b_government * government_a + b_ngo * ngo_a + b_transfer * transfer_a + b_nocluster * nocluster_a + b_optional * optional_a + b_cost * cost_a
V[['B']] = b_government * government_b + b_ngo * ngo_b + b_transfer * transfer_b + b_nocluster * nocluster_b + b_optional * optional_b + b_cost * cost_b
V[['C']] = b_sq
#V[['C']] = 0
### Define settings for MNL model component
mnl_settings = list(
alternatives = c(A=1, B=2, C=3),
avail = 1,
choiceVar = decision, # depends on the choice variable name used in Stata
V = V
)
### Compute probabilities using MNL model
P[['model']] = apollo_mnl(mnl_settings, functionality)
### Take product across observation for same individual
P = apollo_panelProd(P, apollo_inputs, functionality)
### Average across inter-individual draws
P = apollo_avgInterDraws(P, apollo_inputs, functionality)
### Prepare and return outputs of function
P = apollo_prepareProb(P, apollo_inputs, functionality)
return(P)
}
# ################################################################# #
#### MODEL ESTIMATION ####
# ################################################################# #
model = apollo_estimate(apollo_beta, apollo_fixed, apollo_probabilities, apollo_inputs)
# ################################################################# #
#### MODEL OUTPUTS ####
# ################################################################# #
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO SCREEN) ----
# ----------------------------------------------------------------- #
apollo_modelOutput(model, list(printPVal = 2))
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO FILE, using model name) ----
# ----------------------------------------------------------------- #
apollo_saveOutput(
model,
list(printPVal = 2)
)
#################################
# Cost Parameter Transformation #
#################################
# Transform the cost parameter using the method suggested by D. Palma; see the forum post:
# http://www.apollochoicemodelling.com/forum/viewtopic.php?f=16&t=262
# mu and sigma are the mean and s.d. of the log of the cost coefficient; the delta method
# below recovers the mean and s.d. of the lognormal coefficient itself.
apollo_deltaMethod(model, deltaMethod_settings = list(operation="lognormal",parName1="mu_log_b_cost",parName2="sigma_log_b_cost"))
############################################################
# Estimating difference between government and NGO estimators #
############################################################
# compare https://www.apollochoicemodelling.com/forum/viewtopic.php?t=202.
# Note: retailer is the reference level in this model, so mu_b_retailer is not estimated.
apollo_deltaMethod(model, deltaMethod_settings = list(operation="diff",parName1="mu_b_government",parName2="mu_b_ngo"))
#############################################################
# Estimating difference between no cluster and FC optional  #
#############################################################
# compare https://www.apollochoicemodelling.com/forum/viewtopic.php?t=202.
# Note: FC required is the reference level in this model, so mu_b_required is not estimated.
apollo_deltaMethod(model, deltaMethod_settings = list(operation="diff",parName1="mu_b_nocluster",parName2="mu_b_optional"))
# ################################################################# #
#### Setting up the work environment ####
# ################################################################# #
### Clear memory
rm(list = ls())
# check the current working directory
getwd()
# change the working directory
setwd("U:/Meine Bibliothek/Paper_/Paper 4 Choice Experiment - CL MNL/5_Data/5_MainData_Collection/3_R/WTP Space/MXL in WTP - Palma Method")
# Load R packages from library
library(apollo)
library(tidyr)
library(readxl)
### Initialise code
apollo_initialise()
# ########################################################################## #
#### Adapted overspecified MXL model in WTP space ####
# ########################################################################## #
### Set core controls
apollo_control = list(
modelName ="MXLmodel_Paper4_WTP_palma_overspecified_adapted ASC",
modelDescr ="MXL model in WTP paper 4 - palma method - overspecified ASC",
indivID ="id",
nCores = 6,
mixing = TRUE
)
# ########################################################################## #
#### LOAD DATA AND APPLY ANY TRANSFORMATIONS ####
# ########################################################################## #
library(haven)
database <- read.csv("Data_UKNLFR_mf_.csv", header = TRUE)
# ########################################################################## #
#### Creating categorical variables 'government', 'transfer', 'nocluster' ####
# ########################################################################## #
# 'government' dummy
database$government_a <- ifelse(database$retailer_a == 0 & database$ngo_a == 0, 1, 0)
database$government_b <- ifelse(database$retailer_b == 0 & database$ngo_b == 0, 1, 0)
# 'transfer' dummy
database$transfer_a <- ifelse(database$label_a == 0, 1, 0)
database$transfer_b <- ifelse(database$label_b == 0, 1, 0)
# 'nocluster' dummy
database$nocluster_a <- ifelse(database$optional_a == 0 & database$required_a == 0, 1, 0)
database$nocluster_b <- ifelse(database$optional_b == 0 & database$required_b == 0, 1, 0)
# ################################################################# #
#### Choosing Reference levels ####
# ################################################################# #
# The reference levels for the MXL model are chosen based on the sigma parameters.
# For each attribute, the reference level is the level with the lowest SD.
# Reference levels: NGO, transfer, required
# Due to the WTP transformation, the ASC is no longer the reference level.
# ################################################################# #
#### DEFINE MODEL PARAMETERS ####
# ################################################################# #
### Vector of parameters, including any that are kept fixed in estimation
apollo_beta = c(mu_sq = 0,
#mu_asc = 0,
mu_b_government = 0,
mu_b_ngo = 0,
mu_b_transfer = 0,
mu_b_nocluster = 0,
mu_b_optional = 0,
mu_log_b_cost = -5,
sigma_sq = 0,
# sigma_asc = 0,
sigma_b_government = 0,
sigma_b_ngo = 0,
sigma_b_transfer = 0,
sigma_b_nocluster = 0,
sigma_b_optional = 0,
sigma_log_b_cost = -2)
### Vector with names (in quotes) of parameters to be kept fixed at their starting value in apollo_beta, use apollo_beta_fixed = c() if none
apollo_fixed = c() # fixing parameters may be of interest later for a latent class (independent availability) model
# ################################################################# #
#### DEFINE RANDOM COMPONENTS ####
# ################################################################# #
### Sobol draws: analysis of different types of draws: Czajkowski & Budzinski 2017
apollo_draws = list(
interDrawsType = "sobol", # Halton draws are not recommended for more than five random coefficients (cf. Bhat, 2003). For computing Bayesian efficiency, pseudo-random (pseudo Monte Carlo) draws are the simplest (ChoiceMetrics, 2018).
interNDraws = 3000, # more than 2000 Sobol draws recommended by Czajkowski & Budzinski 2019
interUnifDraws = c(),
interNormDraws = c("draws_sq","draws_asc", "draws_government", "draws_retailer","draws_ngo", "draws_transfer", "draws_label", "draws_nocluster", "draws_optional","draws_required","draws_cost")
)
### Create random parameters
apollo_randCoeff = function(apollo_beta, apollo_inputs){
randcoeff = list()
randcoeff[["b_sq"]] = ( mu_sq + sigma_sq * draws_sq )
#randcoeff[["b_asc"]] = ( mu_asc + sigma_asc * draws_asc )
randcoeff[["b_government"]] = ( mu_b_government + sigma_b_government * draws_government )
randcoeff[["b_ngo"]] = ( mu_b_ngo + sigma_b_ngo * draws_ngo )
randcoeff[["b_transfer"]] = ( mu_b_transfer + sigma_b_transfer * draws_transfer )
randcoeff[["b_nocluster"]] = ( mu_b_nocluster + sigma_b_nocluster * draws_nocluster )
randcoeff[["b_optional"]] = ( mu_b_optional + sigma_b_optional * draws_optional )
randcoeff[["b_cost"]] = exp( mu_log_b_cost + sigma_log_b_cost * draws_cost )
return(randcoeff)
}
# ################################################################# #
#### GROUP AND VALIDATE INPUTS ####
# ################################################################# #
apollo_inputs = apollo_validateInputs()
# ################################################################# #
#### DEFINE MODEL AND LIKELIHOOD FUNCTION ####
# ################################################################# #
apollo_probabilities=function(apollo_beta, apollo_inputs, functionality="estimate"){
### Function initialization: do not change the following three commands
### Attach inputs and detach after function exit
apollo_attach(apollo_beta, apollo_inputs)
on.exit(apollo_detach(apollo_beta, apollo_inputs))
### Create list of probabilities P
P = list()
### List of utilities: these must use the same names as in mnl_settings, order is irrelevant
V = list()
#V[['A']] = b_cost * (b_asc + b_government * government_a + b_retailer * retailer_a + b_label * label_a + b_nocluster * nocluster_a + b_optional * optional_a + cost_a)
#V[['B']] = b_cost * (b_asc + b_government * government_b + b_retailer * retailer_b + b_label * label_b + b_nocluster * nocluster_b + b_optional * optional_b + cost_b)
#V[['C']] = 0
#V[['A']] = b_cost * (b_asc + b_government * government_a + b_ngo * ngo_a + b_transfer * transfer_a + b_nocluster * nocluster_a + b_optional * optional_a + b_cost * cost_a)
#V[['B']] = b_cost * (b_asc + b_government * government_b + b_ngo * ngo_b + b_transfer * transfer_b + b_nocluster * nocluster_b + b_optional * optional_b + b_cost * cost_b)
#V[['C']] = 0
# The results of the V with sq times b_sq did not make sense.
V[['A']] = b_cost * (b_government * government_a + b_ngo * ngo_a + b_transfer * transfer_a + b_nocluster * nocluster_a + b_optional * optional_a + cost_a)
V[['B']] = b_cost * (b_government * government_b + b_ngo * ngo_b + b_transfer * transfer_b + b_nocluster * nocluster_b + b_optional * optional_b + cost_b)
V[['C']] = b_cost * b_sq
### Define settings for MNL model component
mnl_settings = list(
alternatives = c(A=1, B=2, C=3),
avail = 1,
choiceVar = decision, # depends on the choice variable name used in Stata
V = V
)
### Compute probabilities using MNL model
P[['model']] = apollo_mnl(mnl_settings, functionality)
### Take product across observation for same individual
P = apollo_panelProd(P, apollo_inputs, functionality)
### Average across inter-individual draws
P = apollo_avgInterDraws(P, apollo_inputs, functionality)
### Prepare and return outputs of function
P = apollo_prepareProb(P, apollo_inputs, functionality)
return(P)
}
# ################################################################# #
#### MODEL ESTIMATION ####
# ################################################################# #
model = apollo_estimate(apollo_beta, apollo_fixed, apollo_probabilities, apollo_inputs)
# ################################################################# #
#### MODEL OUTPUTS ####
# ################################################################# #
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO SCREEN) ----
# ----------------------------------------------------------------- #
apollo_modelOutput(model, list(printPVal = 2))
## The output for s.e., t.rat.(0), p(2-sided), Rob.s.e., Rob.t.rat.(0) and Rob.p(2-sided) is NA.
## Maybe because of too few observations?
## But I currently do not know which values I need to put into the parameter vectors.
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO FILE, using model name) ----
# ----------------------------------------------------------------- #
apollo_saveOutput(
model,
list(printPVal = 2)
)
#################################
# Cost Parameter Transformation #
#################################
# Transform the cost parameter using the method suggested by D. Palma; see the forum post:
# http://www.apollochoicemodelling.com/forum/viewtopic.php?f=16&t=262
# mu and sigma are the mean and s.d. of the log of the cost coefficient; the delta method
# below recovers the mean and s.d. of the lognormal coefficient itself.
apollo_deltaMethod(model, deltaMethod_settings = list(operation="lognormal",parName1="mu_log_b_cost",parName2="sigma_log_b_cost"))
############################################################
# Estimating difference between government and NGO estimators #
############################################################
# compare https://www.apollochoicemodelling.com/forum/viewtopic.php?t=202.
# Note: retailer is the reference level in this model, so mu_b_retailer is not estimated.
apollo_deltaMethod(model, deltaMethod_settings = list(operation="diff",parName1="mu_b_government",parName2="mu_b_ngo"))
#############################################################
# Estimating difference between no cluster and FC optional  #
#############################################################
# compare https://www.apollochoicemodelling.com/forum/viewtopic.php?t=202.
# Note: FC required is the reference level in this model, so mu_b_required is not estimated.
apollo_deltaMethod(model, deltaMethod_settings = list(operation="diff",parName1="mu_b_nocluster",parName2="mu_b_optional"))
All models in utility space differ quite strongly from the initial MXL estimated in utility space using the standard approach from the Apollo example scripts.
Each estimation leads to results that differ between utility and WTP space, and strong heterogeneity remains in all cases. Furthermore, the ASC/SQ estimate in WTP space does not align with the one in utility space. An LL ratio test (a sketch of the call is shown below) shows that the first approach (the correlation model) has the best model fit.
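(A minimal sketch of how the LL ratio test can be run with Apollo's apollo_lrTest; the model names are placeholders for the modelName values of the saved outputs being compared.)
Code:
# hypothetical model names; replace with the saved outputs of the two models being compared
apollo_lrTest("MXLmodel_Paper4_utility_correlation", "MXLmodel_Paper4_utility_palma_overspecified")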
Considering all of this, I am concerned about whether I have done the estimation correctly and, if so, how I should decide on one of the models.
To make the passage above easier to follow, I have added my original, standard mixed logit model in utility space below. The 'Standard Mixed Logit Model' shows the model in utility space, which, like the MMNL in preference space, was coded on the basis of the Apollo example files.
Thank you very much for your help so far.
Looking forward to reading your thoughts.