Cross validation and Prediction

kiki
Posts: 12
Joined: 25 Mar 2024, 19:11

Cross validation and Prediction

Post by kiki »

Hi,

I am comparing the prediction performance of mixed logit models estimated with and without weighting using the Apollo package. The weights are applied at the task level (i.e., per row in the dataset).

I would like to compare prediction performance indicators such as accuracy, F1-score, and precision, but I have not found a straightforward way to extract these from the current outputs. The apollo_outOfSample function reports the difference in log-likelihood between the estimation and validation samples, but it does not seem to report any other performance indicators (or perhaps I am missing something).

I have also attempted to use the apollo_prediction function. While it returns predicted probabilities, I noticed that:

1) The sum of average predictions at MLE does not equal 1.
2) The aggregated predictions across alternatives do not sum to the total number of observations.

Here is a sample of the output from apollo_prediction:
Aggregated prediction:
         at MLE  Sampled mean  Sampled std.dev.  Quantile 0.025  Quantile 0.975
alt1      130.7         129.9             5.039           120.7           138.7
alt2      130.8         130.0             4.899           121.5           138.6
alt3      204.3         205.9             9.935           188.5           223.6

Average prediction:
         at MLE  Sampled mean  Sampled std.dev.  Quantile 0.025  Quantile 0.975
alt1    0.06985       0.06942          0.002693         0.06453         0.07413
alt2    0.06990       0.06946          0.002618         0.06493         0.07409
alt3    0.10920       0.11007          0.005310         0.10073         0.11950

Could you kindly advise:
1) How I might obtain accuracy, F1-score, or precision for prediction evaluation in Apollo?
2) Whether these discrepancies in the prediction summaries are expected?
3) What would be the recommended way to compare predictive performance between models with and without weights?
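In case it helps to clarify what I am after, below is a rough sketch of the kind of calculation I have in mind. This is not Apollo code; classification_metrics is a name I made up, and it simply takes a matrix of predicted probabilities (rows = observations, columns = alternatives) plus the observed choices, and computes accuracy together with macro-averaged precision, recall, and F1:

```r
# Hypothetical helper, not part of Apollo: derive classification metrics
# from predicted choice probabilities and observed choices.
classification_metrics <- function(pred_probs, choice) {
  predicted <- max.col(pred_probs)   # predicted alternative = highest probability
  n_alt <- ncol(pred_probs)
  acc <- mean(predicted == choice)   # overall accuracy

  # Per-alternative precision, recall, and F1, then macro-average
  prec <- rec <- f1 <- numeric(n_alt)
  for (j in seq_len(n_alt)) {
    tp <- sum(predicted == j & choice == j)   # true positives for alternative j
    fp <- sum(predicted == j & choice != j)   # false positives
    fn <- sum(predicted != j & choice == j)   # false negatives
    prec[j] <- if (tp + fp > 0) tp / (tp + fp) else 0
    rec[j]  <- if (tp + fn > 0) tp / (tp + fn) else 0
    f1[j]   <- if (prec[j] + rec[j] > 0) {
      2 * prec[j] * rec[j] / (prec[j] + rec[j])
    } else 0
  }
  list(accuracy = acc, precision = mean(prec),
       recall = mean(rec), f1 = mean(f1))
}

# Toy example: three observations, three alternatives
probs <- matrix(c(0.8, 0.1, 0.1,
                  0.1, 0.8, 0.1,
                  0.2, 0.3, 0.5), nrow = 3, byrow = TRUE)
classification_metrics(probs, choice = c(1, 2, 3))
```

My question is essentially whether the probability columns returned by apollo_prediction can be used directly as pred_probs here, or whether there is a recommended Apollo workflow for this instead.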

Thank you very much for your time and assistance.

Best regards,
Kiki
stephanehess
Site Admin
Posts: 1351
Joined: 24 Apr 2020, 16:29

Re: Cross validation and Prediction

Post by stephanehess »

Hi

Can you please show us the entire code?

Thanks
--------------------------------
Stephane Hess
www.stephanehess.me.uk