I am posting the "more complex model" for which I am getting Inf and NaN values in the output. At first I thought it was an overspecification issue, so I tried using only one latent variable with one indicator. I also thought it could be a problem with the starting values, so I used the values from the reduced form of the hybrid model (choice component only). Finally, I tried something even simpler and recoded the model in preference space, without interactions of the latent variables, and even with a fixed cost coefficient. However, I get the same error with every attempt.
I have tried everything I can think of, so I am wondering whether this is a specification problem or a coding mistake.
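For what it is worth, one pattern that produces exactly this kind of -Inf log-likelihood is a normal measurement density whose scale parameter sits at zero: in that limit the density is a point mass, so the log-density is -Inf for any observation not exactly at the mean. A minimal base-R sketch (the indicator values here are hypothetical, not taken from the model below):

```r
# A normal density with sd = 0 collapses to a point mass at the mean:
# the log-density is -Inf for every observation not exactly at the mean.
x <- c(-0.19, -0.19, 0.81)   # hypothetical mean-centred dummy indicator

ll_zero_sd <- sum(dnorm(x, mean = 0, sd = 0, log = TRUE))
ll_pos_sd  <- sum(dnorm(x, mean = 0, sd = 1, log = TRUE))

print(ll_zero_sd)  # -Inf
print(ll_pos_sd)   # finite
```

This is only an illustration of the mechanism; whether it is the cause here depends on which parameters actually reach zero during estimation.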
Code:
###########################################
############################################
#hybrid model, dummy-coded clean attribute, lognormal cost, wtp space--------------------------------------------------
############################################
############################################
### IMPORTANT: working in WTP space with a lognormal cost coefficient requires scaling the cost attribute!
# ################################################################# #
#### LOAD LIBRARY AND DEFINE CORE SETTINGS ####
# ################################################################# #
### Clear memory
rm(list = ls())
### Load required libraries
library(apollo)
library(dplyr)  # needed for the %>% pipes and mutate() calls below
library(descr)  # assumed source of freq(); load whichever package actually provides it
### Initialise code
apollo_initialise()
### Set core controls
apollo_control = list(
modelName ="ApolloHXL_cfix_cleandummy_logc_wtp_scaledcost",
modelDescr ="Mixed logit model, uncorrelated lognormals, WTP space",
indivID ="ID",
mixing = TRUE,
nCores = 3
)
# ################################################################# #
#### LOAD DATA AND APPLY ANY TRANSFORMATIONS ####
# ################################################################# #
datafilename="data_wide3.csv"
database=read.csv(datafilename,header=T)
names(database)
database$concerned[is.na(database$concerned)] <- 3
database$concerned[database$concerned == 0] <- 1
###imp!!! check coding
#prepare database
#none; 1 = status quo option, 0 otherwise
#database$SQ<-ifelse(database$concept==1,1,0)
#database$ASC<-ifelse(database$concept==1,0,1)
freq(database$none_1)
freq(database$none_2)
freq(database$none_3)
database<-plyr::rename(database, c("person"="ID","set"="task","none_1"="SQ_1","none_2"="SQ_2","none_3"="SQ_3"))
##modify all attitudinal variables# CHECK CODING!!!
database<-database%>%
###the plastic variable with underscore is the right one!
#dplyr::mutate(serious_plastic=factor(serious_plastic,levels=c(1,2,3,4,5),labels=c("Very serious","Serious","Not very serious","Not at all serious","I don't know")))%>%#is this labelling correct?
#dplyr::mutate(serious_plastic=ordered(serious_plastic,levels=c("Very serious","Serious","Not very serious","Not at all serious","I don't know")))%>%
#dplyr::mutate(serious_plastic=as.numeric(serious_plastic))%>%
#dplyr::mutate(seriousplastic=factor(seriousplastic,levels=c(1,2,3,4,5),labels=c("Very serious","Serious","Not very serious","Not at all serious","I don't know")))%>%#is this labelling correct?
#dplyr::mutate(seriousplastic=ordered(seriousplastic,levels=c("Very serious","Serious","Not very serious","Not at all serious","I don't know")))%>%
#dplyr::mutate(seriousplastic=as.numeric(seriousplastic))%>%
dplyr::mutate(cleanup_responsibility=as.numeric(cleanup_responsibility))%>%
dplyr::mutate(cleanup_responsibility=factor(cleanup_responsibility,levels=c(1,2,3,4,5),labels=c("Governments","Individuals","I don't know","Businesses","Other (please specify)")))%>%#is this labelling correct?
#dplyr::mutate(cleanup_responsibility=as.numeric(cleanup_responsibility))%>%
dplyr::mutate(reduce_responsibility=as.numeric(reduce_responsibility))%>%
dplyr::mutate(reduce_responsibility=factor(reduce_responsibility,levels=c(1,2,3,4,5),labels=c("Governments","Individuals","I don't know","Businesses","Other (please specify)")))%>%#is this labelling correct?
#dplyr::mutate(reduce_responsibility=as.numeric(reduce_responsibility))%>%
#dplyr::mutate(peoplecare=as.numeric(peoplecare))%>%
#dplyr::mutate(peoplecare=factor(peoplecare,levels=c(1,2,3,4,5),labels=c("Strongly Disagree","Disagree","Neither","Agree","Strongly Agree")))%>%#is this labelling correct?
#dplyr::mutate(peoplecare=ordered(peoplecare,levels=c("Strongly Disagree","Disagree","Neither","Agree","Strongly Agree")))%>%
#dplyr::mutate(peoplecare=as.numeric(peoplecare))%>%
#dplyr::mutate(peopletakesteps=as.numeric(peopletakesteps))%>%
#dplyr::mutate(peopletakesteps= factor(peopletakesteps,levels=c(1,2,3,4,5),labels=c("Strongly Disagree","Disagree","Neither","Agree","Strongly Agree")))%>%#is this labelling correct?
#dplyr::mutate(peopletakesteps=ordered(peopletakesteps,levels=c("Strongly Disagree","Disagree","Neither","Agree","Strongly Agree")))%>%
#dplyr::mutate(peopletakesteps=as.numeric(peopletakesteps))%>%
#dplyr::mutate(participate_beachclean=as.numeric(participate_beachclean))%>%
#dplyr::mutate(participate_beachclean=factor (participate_beachclean,levels=c(1,2),labels=c("Yes","No")))%>%#is this labelling correct?
#dplyr::mutate(participate_beachclean=as.numeric(participate_beachclean))%>%
#dplyr::mutate(pickuptrash=as.numeric(pickuptrash))%>%
#dplyr::mutate(pickuptrash= factor(pickuptrash,levels=c(1,2,3,4,5),labels=c("Always","Usually","Sometimes","Rarely","Never")))%>%#is this labelling correct?
#dplyr::mutate(pickuptrash= ordered(pickuptrash,levels=c("Always","Usually","Sometimes","Rarely","Never")))%>%
#dplyr::mutate(pickuptrash=as.numeric(pickuptrash))%>%
#dplyr::mutate(concerned=as.numeric(concerned))%>%
#dplyr::mutate(concerned= factor(concerned,levels=c(1,2,3,4,5),labels=c("Strongly Disagree","Disagree","Neither","Agree","Strongly Agree")))%>%#is this labelling correct?
#dplyr::mutate(concerned= ordered(concerned,levels=c("Strongly Disagree","Disagree","Neither","Agree","Strongly Agree")))%>%
#dplyr::mutate(concerned=as.numeric(concerned))%>%
#dplyr::mutate(reduceplastic=as.numeric(reduceplastic))%>%
#dplyr::mutate(reduceplastic= factor(reduceplastic,levels=c(1,2,3,4,5),labels=c("Strongly Disagree","Disagree","Neither","Agree","Strongly Agree")))%>%#is this labelling correct?
#dplyr::mutate(reduceplastic= ordered(reduceplastic,levels=c("Strongly Disagree","Disagree","Neither","Agree","Strongly Agree")))%>%
#dplyr::mutate(reduceplastic=as.numeric(reduceplastic))%>%
#attributes
dplyr::mutate(cleanup_1=as.numeric(cleanup_1))%>%
dplyr::mutate(cleanup_1= factor(cleanup_1,levels=c(0.33,0.5,1,2),labels=c("every 3 years","every other year","every year","twice per year")))%>%#is this labelling correct?
dplyr::mutate(cleanup_2=as.numeric(cleanup_2))%>%
dplyr::mutate(cleanup_2= factor(cleanup_2,levels=c(0.33,0.5,1,2),labels=c("every 3 years","every other year","every year","twice per year")))%>%#is this labelling correct?
dplyr::mutate(cleanup_3=as.numeric(cleanup_3))%>%
dplyr::mutate(cleanup_3= factor(cleanup_3,levels=c(0.33,0.5,1,2),labels=c("every 3 years","every other year","every year","twice per year")))#is this labelling correct?
#5* 3=15
#CONVERT NON ORDERED VARIABLES TO DUMMIES
###IMP!! TO MANAGE ATTITUDINAL VARS#DELETE idk
freq(database$cleanup_responsibility)# NOTE: individuals not answering "I don't know" or "Other" are only 63.47%; restricting to them would significantly reduce the sample
database[,"cleanup_resp_busi"]<-ifelse(database$cleanup_responsibility=="Businesses",1,0)
database[,"cleanup_resp_gov"]<-ifelse(database$cleanup_responsibility=="Governments",1,0)
database[,"cleanup_resp_indiv"]<-ifelse(database$cleanup_responsibility=="Individuals",1,0)
#other
freq(database$reduce_responsibility)# NOTE: individuals not answering "I don't know" or "Other" are only 63.47%; restricting to them would significantly reduce the sample
database[,"reduce_resp_busi"]<-ifelse(database$reduce_responsibility=="Businesses",1,0)
database[,"reduce_resp_gov"]<-ifelse(database$reduce_responsibility=="Governments",1,0)
database[,"reduce_resp_indiv"]<-ifelse(database$reduce_responsibility=="Individuals",1,0)
#other
#scale cost?
database$fee_1<-database$fee_1/100
database$fee_2<-database$fee_2/100
database$fee_3<-database$fee_3/100
#modify variables with ranges to mid point?
str(database$age)
database<-database%>%
dplyr::mutate(age=as.character(age))
database$age[database$age == "1"] <- (1+17)/2
database$age[database$age == "2"] <- (18+20)/2
database$age[database$age == "3"] <- (21+29)/2
database$age[database$age == "4"] <- (30+39)/2
database$age[database$age == "5"]<- (40+49)/2
database$age[database$age == "6"]<- (50+59)/2
database$age[database$age == "7"]<- (60+100)/2
database<-database%>%
dplyr::mutate(age=as.numeric(age))
str(database$income)
database<-database%>%
dplyr::mutate(income=as.character(income))
database$income[database$income == "1"] <- (0+9999)/2
database$income[database$income == "2"] <- (10000+24999)/2
database$income[database$income == "3"] <- (25000+49999)/2
database$income[database$income == "4"] <- (50000+74999)/2
database$income[database$income == "5"]<- (75000+99999)/2
database$income[database$income == "6"]<- (100000+124999)/2
database$income[database$income == "7"]<- (125000+149999)/2
database$income[database$income == "8"]<- (150000+174999)/2
database$income[database$income == "9"]<- (175000+199999)/2
database$income[database$income == "10"]<- (200000+224999)/2
database$income[database$income == "11"]<- NA#Prefer not to answer
database<-database%>%
dplyr::mutate(income=as.numeric(income))
#SE dummies
names(database)
freq(database$commercial_fish)
freq(database$recreational_fish)
freq(database$not_fisherman)
freq(database$age)
database[,"agemoreav"]<-ifelse(database$age > 50,1,0)# above-average age (average assumed ~50 -- check the actual sample average!)
test<-data.frame(database$age,database$agemoreav)
freq(as.factor(database$zipcode))
freq(database$employment)
database[,"employed"]<-ifelse(database$employment==1|database$employment==2,1,0)#Employed, working full-time & Employed, working part-time
database[,"unemployed"]<-ifelse(database$employment==3|database$employment==4|database$employment==5|database$employment==6,1,0) # Not employed,looking for work/Not employed, NOT looking for work/Retired/Disabled, not able to work
test<-data.frame(database$employment,database$unemployed,database$employed)
freq(database$children)
freq(database$education)
database[,"notgraduate"]<-ifelse(database$education==1|database$education==2|database$education==3|database$education==4,0,1)#Less than high school degree/High school degree or equivalent (e.g., GED)/Some college but no degree/Associate degree/
database[,"graduate"]<-ifelse(database$education==5|database$education==6,1,0)#Bachelor degree/Graduate degree
test<-data.frame(database$education,database$notgraduate,database$graduate)
freq(database$income)
summary(database$income)#IMPORTANT TO REDUCE SAMPLE TO DELETE NA'S
database[,"incomemoreav"]<-ifelse(database$income > 91049,1,0)#CORRECT POPULATION AVERAGE INCOME!!!
database[,"incomethousands"]<-database$income/1000
freq(database$gender)#IMPORTANT TO REDUCE SAMPLE TO DELETE NA'S
database[,"female"]<-ifelse(database$gender==1,1,0)
freq(database$householdsize)
#attribute dummies
freq(database$cleanup_1)#0.33 (every three years), 0.5 (every two years), 1 (once), 2 (twice)
database[,"clean_up_every3_1" ]<- ifelse(database$cleanup_1=="every 3 years",1,0)
database[,"clean_up_every2_1" ]<- ifelse(database$cleanup_1=="every other year",1,0)
database[,"clean_up_every1_1" ]<- ifelse(database$cleanup_1=="every year",1,0)
database[,"clean_up_2yearly_1"]<- ifelse(database$cleanup_1=="twice per year",1,0)
database[,"clean_up_every3_2" ]<- ifelse(database$cleanup_2=="every 3 years",1,0)
database[,"clean_up_every2_2" ]<- ifelse(database$cleanup_2=="every other year",1,0)
database[,"clean_up_every1_2" ]<- ifelse(database$cleanup_2=="every year",1,0)
database[,"clean_up_2yearly_2"]<- ifelse(database$cleanup_2=="twice per year",1,0)
database[,"clean_up_every3_3" ]<- ifelse(database$cleanup_3=="every 3 years",1,0)
database[,"clean_up_every2_3" ]<- ifelse(database$cleanup_3=="every other year",1,0)
database[,"clean_up_every1_3" ]<- ifelse(database$cleanup_3=="every year",1,0)
database[,"clean_up_2yearly_3"]<- ifelse(database$cleanup_3=="twice per year",1,0)
# OPTIONAL: apply exclusions (use subset of data)
# example: data=subset(data,database$sex==1)
database_original<-database
freq(database$serious_plastic)
database<-subset(database,database$serious_plastic!=5)#-2.26
#database<-subset(database,database$serious_plastic!="I don't know")#-2.26
freq(database$reduceplastic)#delete NA
database<-subset(database,!is.na(database$reduceplastic))
#freq(database$cleanup_responsibility)
#database<-subset(database,database$cleanup_responsibility!="I don't know") #-19.21-17.33
#database<-subset(database,database$cleanup_responsibility!="Other (please specify)") #-19.21-17.33
#freq(database$reduce_responsibility)
#database<-subset(database,database$reduce_responsibility!="I don't know") #-34.27-9.98
#database<-subset(database,database$reduce_responsibility!="Other (please specify)") #-34.27-9.98
#freq(database$income)
#database<-subset(database,database$income!= "NA")#-12.05%
#freq(database$gender)
#database<-subset(database,database$gender!= 3)#OTHER-2.45%
##modify all attitudinal variables# CHECK CODING!!!
#database<-database%>%
# dplyr::mutate(serious_plastic=as.numeric(serious_plastic))%>%#is this labelling correct?
# dplyr::mutate(serious_plastic=factor(serious_plastic,levels=c(1,2,3,4),labels=c("Very serious","Serious","Not very serious","Not at all serious")))%>%#is this labelling correct?
# dplyr::mutate(serious_plastic=ordered(serious_plastic,levels=c("Very serious","Serious","Not very serious","Not at all serious")))%>%
# dplyr::mutate(serious_plastic=as.numeric(serious_plastic))#is this labelling correct?
### RECODE: the "concerned" variable has missing data, and the ordered model needs consecutive categories -- be careful with interpretation!
### (level 2 is assumed absent after the earlier recoding; otherwise it would collide with the recoded 3s)
database$concerned[database$concerned == 3] <- 2
database$concerned[database$concerned == 4] <- 3
database$concerned[database$concerned == 5] <- 4
freq(database$concerned)
freq(database$reduceplastic)
freq(database$serious_plastic)
str(database$concerned)
str(database$reduceplastic)
str(database$serious_plastic)
####################
#continous treatment
####################
### Subtract mean of indicator variables to centre them on zero
database$cleanup_resp_busi=database$cleanup_resp_busi-mean(database$cleanup_resp_busi, na.rm=TRUE)
database$reduce_resp_busi=database$reduce_resp_busi-mean(database$reduce_resp_busi, na.rm=TRUE)
database$cleanup_resp_gov=database$cleanup_resp_gov-mean(database$cleanup_resp_gov, na.rm=TRUE)
database$reduce_resp_gov=database$reduce_resp_gov-mean(database$reduce_resp_gov, na.rm=TRUE)
database$cleanup_resp_indiv=database$cleanup_resp_indiv-mean(database$cleanup_resp_indiv, na.rm=TRUE)
database$reduce_resp_indiv=database$reduce_resp_indiv-mean(database$reduce_resp_indiv, na.rm=TRUE)
# ################################################################# #
#### DEFINE MODEL PARAMETERS ####
# ################################################################# #
### Vector of parameters, including any that are kept fixed in estimation
###clean as dummy
#PRIORS FROM REDUCED MODEL
apollo_beta=c(
-3.0789,
-1.8075,
-2.0553,
-1.5636,
-1.8525,
-2.2608,
-0.6595,
-7.1118,
-1.1066,
1.2537,
0.1437,
-0.5665,
-0.4155,
-0.5823,
rep(0,21), # 21 latent-variable interaction terms
rep(0,8),  # 8 zeta measurement loadings
rep(0,6),  # 6 sigma measurement scales -- NOTE: starting a normal-density sd at 0 makes the density degenerate (log-density -Inf away from the mean); strictly positive starting values may be needed here
-1,
0,
1,
-1,
-0.5,
0.5,
1
)
names(apollo_beta)<-c("mu_asc1",
"v_mu_b_rep",
"v_mu_b_buyb",
"v_mu_b_ce2y",
"v_mu_b_ce1y",
"v_mu_b_c2py",
"mu_b_fee_log",
"sig_asc1",
"sig_b_rep",
"sig_b_buyb",
"sig_b_ce2y",
"sig_b_ce1y",
"sig_b_c2py",
"sig_b_fee",
#
"LV_BUS_asc1",
"LV_BUS_rep",
"LV_BUS_buyb",
"LV_BUS_ce2y",
"LV_BUS_ce1y",
"LV_BUS_c2py",
"LV_BUS_fee",
"LV_GOV_asc1",
"LV_GOV_rep",
"LV_GOV_buyb",
"LV_GOV_ce2y",
"LV_GOV_ce1y",
"LV_GOV_c2py",
"LV_GOV_fee",
"LV_IND_asc1",
"LV_IND_rep",
"LV_IND_buyb",
"LV_IND_ce2y",
"LV_IND_ce1y",
"LV_IND_c2py",
"LV_IND_fee",
#
"zeta_BUS_cleanup_resp_busi",
"zeta_BUS_reduce_resp_busi",
"zeta_GOV_cleanup_resp_gov",
"zeta_GOV_reduce_resp_gov",
"zeta_IND_cleanup_resp_indiv",
"zeta_IND_reduce_resp_indiv",
"zeta_IND_concerned",
"zeta_IND_reduceplastic",
#
"sigma_BUS_cleanup_resp_busi",
"sigma_BUS_reduce_resp_busi",
"sigma_GOV_cleanup_resp_gov",
"sigma_GOV_reduce_resp_gov",
"sigma_IND_cleanup_resp_indiv",
"sigma_IND_reduce_resp_indiv",
"tau_IND_concerned_1",
"tau_IND_concerned_2",
"tau_IND_concerned_3",
"tau_IND_reduceplastic_1",
"tau_IND_reduceplastic_2",
"tau_IND_reduceplastic_3",
"tau_IND_reduceplastic_4")
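### Sanity check (editor's suggestion, not part of the original script): if the
### names vector is shorter than the value vector, names() pads with NA and the
### parameters are silently misaligned with their starting values.
stopifnot(length(apollo_beta) == 56, !anyNA(names(apollo_beta)))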
### Vector with names (in quotes) of parameters to be kept fixed at their starting value in apollo_beta; use apollo_fixed = c() if none
apollo_fixed = c()
# ################################################################# #
#### DEFINE RANDOM COMPONENTS ####
# ################################################################# #
### Set parameters for generating draws
apollo_draws = list(
interDrawsType = "sobol",
interNDraws =50,
interUnifDraws = c(),
interNormDraws = c("eta_BUS","eta_GOV","eta_IND","draws_asc1","draws_rep","draws_buyb","draws_ce2y","draws_ce1y","draws_c2py","draws_fee"),
intraDrawsType = "sobol",
intraNDraws = 0,
intraUnifDraws = c(),
intraNormDraws = c()
)
### Create random parameters
apollo_randCoeff = function(apollo_beta, apollo_inputs){
randcoeff = list()
randcoeff[["asc1"]] = mu_asc1 +sig_asc1 *draws_asc1
randcoeff[["b_rep"]] = v_mu_b_rep +sig_b_rep *draws_rep
randcoeff[["b_buyb"]] = v_mu_b_buyb +sig_b_buyb *draws_buyb
randcoeff[["b_ce2y"]] = v_mu_b_ce2y +sig_b_ce2y *draws_ce2y
randcoeff[["b_ce1y"]] = v_mu_b_ce1y +sig_b_ce1y *draws_ce1y
randcoeff[["b_c2py"]] = v_mu_b_c2py +sig_b_c2py *draws_c2py
randcoeff[["b_fee"]] = -exp( mu_b_fee_log +sig_b_fee *draws_fee)
randcoeff[["LV_BUS"]] = eta_BUS
randcoeff[["LV_GOV"]] = eta_GOV
randcoeff[["LV_IND"]] = eta_IND
return(randcoeff)
}
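### For reference (editor's sketch, arithmetic only): with
### b_fee = -exp(mu + sig*z) and z ~ N(0,1), the median of -b_fee is exp(mu)
### and its mean is exp(mu + sig^2/2), so a large |sig_b_fee| inflates the
### implied cost sensitivity quickly at these starting values.
# exp(-0.6595)                # median of -b_fee at the starting value of mu_b_fee_log
# exp(-0.6595 + 0.5823^2/2)  # mean of -b_fee, using |sig_b_fee| from the starting values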
# ################################################################# #
#### GROUP AND VALIDATE INPUTS ####
# ################################################################# #
apollo_inputs = apollo_validateInputs()
# ################################################################# #
#### DEFINE MODEL AND LIKELIHOOD FUNCTION ####
# ################################################################# #
apollo_probabilities=function(apollo_beta, apollo_inputs, functionality="estimate"){
### Function initialisation: do not change the following three commands
### Attach inputs and detach after function exit
apollo_attach(apollo_beta, apollo_inputs)
on.exit(apollo_detach(apollo_beta, apollo_inputs))
### Create list of probabilities P
P = list()
### Likelihood of indicators
normalDensity_settings1 = list(outcomeNormal=cleanup_resp_busi,
xNormal=zeta_BUS_cleanup_resp_busi*LV_BUS,
mu=0,
sigma=sigma_BUS_cleanup_resp_busi,
rows=(task==1),
componentName="indic_BUS_cleanup_resp_busi")
normalDensity_settings2 = list(outcomeNormal=reduce_resp_busi,
xNormal=zeta_BUS_reduce_resp_busi*LV_BUS,
mu=0,
sigma=sigma_BUS_reduce_resp_busi,
rows=(task==1),
componentName="indic_BUS_reduce_resp_busi")
normalDensity_settings3 = list(outcomeNormal=cleanup_resp_gov,
xNormal=zeta_GOV_cleanup_resp_gov*LV_GOV,
mu=0,
sigma=sigma_GOV_cleanup_resp_gov, # fixed: original used sigma_BUS_cleanup_resp_busi (copy-paste slip), leaving the GOV sigma out of the likelihood
rows=(task==1),
componentName="indic_GOV_cleanup_resp_gov")
normalDensity_settings4 = list(outcomeNormal=reduce_resp_gov,
xNormal=zeta_GOV_reduce_resp_gov*LV_GOV,
mu=0,
sigma=sigma_GOV_reduce_resp_gov, # fixed: original used sigma_BUS_reduce_resp_busi (copy-paste slip)
rows=(task==1),
componentName="indic_GOV_reduce_resp_gov")
normalDensity_settings5 = list(outcomeNormal=cleanup_resp_indiv,
xNormal=zeta_IND_cleanup_resp_indiv*LV_IND,
mu=0,
sigma=sigma_IND_cleanup_resp_indiv,
rows=(task==1),
componentName="indic_IND_cleanup_resp_indiv")
normalDensity_settings6 = list(outcomeNormal=reduce_resp_indiv,
xNormal=zeta_IND_reduce_resp_indiv*LV_IND,
mu=0,
sigma=sigma_IND_reduce_resp_indiv,
rows=(task==1),
componentName="indic_IND_reduce_resp_indiv")
#freq(database$concerned)
ol_settings1 = list(outcomeOrdered=concerned,
V=zeta_IND_concerned*LV_IND,
tau=c(tau_IND_concerned_1,
tau_IND_concerned_2,
tau_IND_concerned_3),
rows=(task==1),
componentName="indic_IND_concerned")
#freq(database$reduceplastic)
ol_settings2 = list(outcomeOrdered=reduceplastic,
V=zeta_IND_reduceplastic*LV_IND,
tau=c(tau_IND_reduceplastic_1,
tau_IND_reduceplastic_2,
tau_IND_reduceplastic_3,
tau_IND_reduceplastic_4),
rows=(task==1),
componentName="indic_IND_reduceplastic")
#NORMAL
P[["indic_BUS_cleanup_resp_busi"]] = apollo_normalDensity(normalDensity_settings1, functionality)
P[["indic_BUS_reduce_resp_busi"]] = apollo_normalDensity(normalDensity_settings2, functionality)
P[["indic_GOV_cleanup_resp_gov"]] = apollo_normalDensity(normalDensity_settings3, functionality)
P[["indic_GOV_reduce_resp_gov"]] = apollo_normalDensity(normalDensity_settings4, functionality)
P[["indic_IND_cleanup_resp_indiv"]] = apollo_normalDensity(normalDensity_settings5, functionality)
P[["indic_IND_reduce_resp_indiv"]] = apollo_normalDensity(normalDensity_settings6, functionality)
#ORDINAL
P[["indic_IND_concerned"]] = apollo_ol(ol_settings1, functionality)
P[["indic_IND_reduceplastic"]] = apollo_ol(ol_settings2, functionality)
#create interacted terms
asc1_LV= LV_BUS_asc1*LV_BUS+LV_GOV_asc1*LV_GOV+ LV_IND_asc1*LV_IND +asc1
rep_LV = LV_BUS_rep*LV_BUS+ LV_GOV_rep*LV_GOV+ LV_IND_rep*LV_IND +b_rep
buyb_LV = LV_BUS_buyb*LV_BUS+LV_GOV_buyb*LV_GOV+ LV_IND_buyb*LV_IND +b_buyb
ce2y_LV = LV_BUS_ce2y*LV_BUS+LV_GOV_ce2y*LV_GOV+ LV_IND_ce2y*LV_IND +b_ce2y
ce1y_LV = LV_BUS_ce1y*LV_BUS+LV_GOV_ce1y*LV_GOV+ LV_IND_ce1y*LV_IND +b_ce1y
c2py_LV = LV_BUS_c2py*LV_BUS+LV_GOV_c2py*LV_GOV+ LV_IND_c2py*LV_IND +b_c2py
#fee_LV = LV_BUS_fee*LV_BUS+ LV_GOV_fee*LV_GOV+ LV_IND_fee*LV_IND +b_fee
### List of utilities: these must use the same names as in mnl_settings, order is irrelevant
### NOTE: the LV_*_fee*LV_* terms below enter all three utilities identically, so they
### difference out of the choice probabilities and cannot be identified from the choice data
V = list()
V[['alt1']]= LV_BUS_fee*LV_BUS+ LV_GOV_fee*LV_GOV+ LV_IND_fee*LV_IND+b_fee*(fee_1+rep_LV*reporting_1+buyb_LV*buyback_1+ce2y_LV*every2_1+ce1y_LV*every1_1+c2py_LV*twoyearly_1)
V[['alt2']]= LV_BUS_fee*LV_BUS+ LV_GOV_fee*LV_GOV+ LV_IND_fee*LV_IND+b_fee*(fee_2+rep_LV*reporting_2+buyb_LV*buyback_2+ce2y_LV*every2_2+ce1y_LV*every1_2+c2py_LV*twoyearly_2)
V[['sq']] = asc1_LV+LV_BUS_fee*LV_BUS+ LV_GOV_fee*LV_GOV+ LV_IND_fee*LV_IND+b_fee*(fee_3+rep_LV*reporting_3+buyb_LV*buyback_3+ce2y_LV*every2_3+ce1y_LV*every1_3+c2py_LV*twoyearly_3)
### Define settings for MNL model component
mnl_settings = list(
alternatives = c(alt1=1, alt2 = 2, sq = 3),
avail = list(alt1=1, alt2=1, sq=1), # explicit full availability avoids the apollo_mnl warning
choiceVar = choice,
V = V
)
### Compute probabilities using MNL model
P[['model']] = apollo_mnl(mnl_settings, functionality)
### Take product across observation for same individual
P = apollo_panelProd(P, apollo_inputs, functionality)
### Average across inter-individual draws
P = apollo_avgInterDraws(P, apollo_inputs, functionality)
### Prepare and return outputs of function
P = apollo_prepareProb(P, apollo_inputs, functionality)
return(P)
}
# ################################################################# #
#### MODEL ESTIMATION ####
# ################################################################# #
#estimate_settings=list(maxIterations=200,silent=TRUE)
model = apollo_estimate(apollo_beta, apollo_fixed,
apollo_probabilities, apollo_inputs, estimate_settings=list(hessianRoutine="maxLik"))
# ################################################################# #
#### MODEL OUTPUTS ####
# ################################################################# #
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO SCREEN) ----
# ----------------------------------------------------------------- #
apollo_modelOutput(model)
# ----------------------------------------------------------------- #
#---- FORMATTED OUTPUT (TO FILE, using model name) ----
# ----------------------------------------------------------------- #
apollo_saveOutput(model)
conditionals <- apollo_conditionals(model,apollo_probabilities, apollo_inputs)
write.csv(conditionals,paste(model$apollo_control$modelName,"_conditionals.csv",sep=""))
########################################################
#### OUTPUT ####
########################################################
Model run using Apollo for R, version 0.0.8
www.ApolloChoiceModelling.com
Model name : ApolloHXL_cfix_cleandummy_logc_wtp_scaledcost_searchprior
Model description : Mixed logit model, uncorrelated Lognormals in utility space
Model run at : 2020-08-21 10:29:31
Estimation method : bfgs
Model diagnosis : successful convergence
Number of individuals : 518
Number of observations : 2590
Number of cores used : 3
Number of inter-person draws : 50 (sobol)
LL(start) : -1945.271
LL(0) : -2845.406
LL(final, whole model) : -1900.564
LL(indic_BUS_cleanup_resp_busi): -Inf
LL(indic_BUS_reduce_resp_busi) : -Inf
LL(indic_GOV_cleanup_resp_gov) : -Inf
LL(indic_GOV_reduce_resp_gov) : -Inf
LL(indic_IND_cleanup_resp_indiv): -Inf
LL(indic_IND_reduce_resp_indiv): -Inf
LL(indic_IND_concerned) : -720.5025
LL(indic_IND_reduceplastic) : -943.4589
Rho-square (0) : 0.3321
Adj.Rho-square (0) : 0.3124
AIC : 3913.13
BIC : 4241.26
Estimated parameters : 56
Time taken (hh:mm:ss) : 00:11:47.25
Iterations : 72
Estimates:
Estimate Std.err. t.ratio(0) Rob.std.err. Rob.t.ratio(0)
mu_asc1 -4.2054 Inf 0 NaN NaN
v_mu_b_rep -1.8305 Inf 0 NaN NaN
v_mu_b_buyb -2.0592 Inf 0 NaN NaN
v_mu_b_ce2y -1.6517 Inf 0 NaN NaN
v_mu_b_ce1y -2.0088 Inf 0 NaN NaN
v_mu_b_c2py -2.3974 Inf 0 NaN NaN
mu_b_fee_log -0.4464 Inf 0 NaN NaN
sig_asc1 -8.5496 Inf 0 NaN NaN
sig_b_rep -0.9428 Inf 0 NaN NaN
sig_b_buyb 0.1494 Inf 0 NaN NaN
sig_b_ce2y -0.2216 Inf 0 NaN NaN
sig_b_ce1y 0.2427 Inf 0 NaN NaN
sig_b_c2py 0.0388 Inf 0 NaN NaN
sig_b_fee 0.5566 Inf 0 NaN NaN
LV_BUS_asc1 -1.3166 Inf 0 NaN NaN
LV_BUS_rep -0.6916 Inf 0 NaN NaN
LV_BUS_buyb 1.2255 Inf 0 NaN NaN
LV_BUS_ce2y -0.7742 Inf 0 NaN NaN
LV_BUS_ce1y -0.4056 Inf 0 NaN NaN
LV_BUS_c2py -0.7979 Inf 0 NaN NaN
LV_BUS_fee 0.0000 Inf 0 NaN NaN
LV_GOV_asc1 0.1955 Inf 0 NaN NaN
LV_GOV_rep 0.0729 Inf 0 NaN NaN
LV_GOV_buyb 0.0987 Inf 0 NaN NaN
LV_GOV_ce2y 1.5536 Inf 0 NaN NaN
LV_GOV_ce1y 1.1164 Inf 0 NaN NaN
LV_GOV_c2py 2.0073 Inf 0 NaN NaN
LV_GOV_fee 0.0000 Inf 0 NaN NaN
LV_IND_asc1 0.2365 Inf 0 NaN NaN
LV_IND_rep 0.6605 Inf 0 NaN NaN
LV_IND_buyb 0.3350 Inf 0 NaN NaN
LV_IND_ce2y 0.1461 Inf 0 NaN NaN
LV_IND_ce1y 1.8795 Inf 0 NaN NaN
LV_IND_c2py 1.1977 Inf 0 NaN NaN
LV_IND_fee 0.0000 Inf 0 NaN NaN
zeta_BUS_cleanup_resp_busi 0.0000 Inf 0 NaN NaN
zeta_BUS_reduce_resp_busi 0.0000 Inf 0 NaN NaN
zeta_GOV_cleanup_resp_gov 0.0000 Inf 0 NaN NaN
zeta_GOV_reduce_resp_gov 0.0000 Inf 0 NaN NaN
zeta_IND_cleanup_resp_indiv 0.0000 Inf 0 NaN NaN
zeta_IND_reduce_resp_indiv 0.0000 Inf 0 NaN NaN
zeta_IND_concerned 0.0000 Inf 0 NaN NaN
zeta_IND_reduceplastic 0.0000 Inf 0 NaN NaN
sigma_BUS_cleanup_resp_busi 0.0000 Inf 0 NaN NaN
sigma_BUS_reduce_resp_busi 0.0000 Inf 0 NaN NaN
sigma_GOV_cleanup_resp_gov 0.0000 Inf 0 NaN NaN
sigma_GOV_reduce_resp_gov 0.0000 Inf 0 NaN NaN
sigma_IND_cleanup_resp_indiv 0.0000 Inf 0 NaN NaN
sigma_IND_reduce_resp_indiv 0.0000 Inf 0 NaN NaN
tau_IND_concerned_1 -1.0000 Inf 0 NaN NaN
tau_IND_concerned_2 0.0000 Inf 0 NaN NaN
tau_IND_concerned_3 1.0000 Inf 0 NaN NaN
tau_IND_reduceplastic_1 -1.0000 Inf 0 NaN NaN
tau_IND_reduceplastic_2 -0.5000 Inf 0 NaN NaN
tau_IND_reduceplastic_3 0.5000 Inf 0 NaN NaN
tau_IND_reduceplastic_4 1.0000 Inf 0 NaN NaN
Summary statistics for dependent variable for Normal
density model component: indic_BUS_cleanup_resp_busi
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.1892 -0.1892 -0.1892 0.0000 -0.1892 0.8108
Summary statistics for dependent variable for Normal
density model component: indic_BUS_reduce_resp_busi
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.3417 -0.3417 -0.3417 0.0000 0.6583 0.6583
Summary statistics for dependent variable for Normal
density model component: indic_GOV_cleanup_resp_gov
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.2876 -0.2876 -0.2876 0.0000 0.7124 0.7124
Summary statistics for dependent variable for Normal
density model component: indic_GOV_reduce_resp_gov
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.3861 -0.3861 -0.3861 0.0000 0.6139 0.6139
Summary statistics for dependent variable for Normal
density model component: indic_IND_cleanup_resp_indiv
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.2799 -0.2799 -0.2799 0.0000 0.7201 0.7201
Summary statistics for dependent variable for Normal
density model component: indic_IND_reduce_resp_indiv
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.1525 -0.1525 -0.1525 0.0000 -0.1525 0.8475
Overview of choices for OL model component: indic_IND_concerned
1 2 3 4
Times chosen 24 36 229 229
Percentage chosen overall 5 7 44 44
Overview of choices for OL model component: indic_IND_reduceplastic
1 2 3 4 5
Times chosen 11 26 41 260 180
Percentage chosen overall 2 5 8 50 35
Overview of choices for MNL model component: model
alt1 alt2 sq
Times available 2590.00 2590.00 2590.0
Times chosen 1018.00 940.00 632.0
Percentage chosen overall 39.31 36.29 24.4
Percentage chosen when available 39.31 36.29 24.4
Warning: Availability not provided to 'apollo_mnl' (or some elements are NA).
Full availability assumed.