Author/Judge’s Commentary: The Outstanding Airline Overbooking Papers

William P. Fox
Dept. of Mathematics
Francis Marion University
Florence, SC 29501
bfox@fmarion.edu

The UMAP Journal 23 (3) (2002) 367–372. ©Copyright 2002 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

Introduction

Once again, Problem B proved to be a bigger challenge than originally considered, both for the students and the judges. The students had a wealth of information for the basic model from the Web and from other resources. Students could consider and refine the basic information to fit the proposed post-9-11 scenario. The judges had to read and evaluate many diverse (yet sometimes similar) approaches in order to find the “best” papers. Judges found mistakes—errors in modeling, assumptions, mathematics, and/or analysis—even in these “best” papers; so it is important to note that “best” does not mean perfect. The judges must read and apply their own subjective analysis to evaluate critically both the technical and expository solutions presented by the teams.

No paper analyzed every element nor applied critical validation and sensitivity analysis to all aspects of their model. Judges found many papers with the exact same model (down to the exact same letters used for the variables), and none of these clearly cited the universal source anywhere in the submission. The failure to properly credit the original source critically hurt these papers; it was obvious their basic model was not theirs but came from a published source.

Advice

At the conclusion of the judging, the judges offered the following comments:
• Follow the instructions
  – Clearly answer all parts.
  – List all assumptions that affect the model and justify your use of those assumptions.
  – Make sure that your conclusions and results are clearly stated.
  – In the summary, put the “bottom line and managerial recommendation” results—not a chronological description of what you did.
  – Restate the problem in your words.

• A CEO memorandum
  – Be succinct.
  – Include “bottom line and managerial results” answers.
  – Do not include methods used or equations.

• Clarity and Style
  – Use a clear style and do not ramble.
  – A table of contents is very helpful to the judges.
  – Pictures, tables, and graphs are helpful, but you must explain them clearly.
  – Do not include a picture, table, or graph that is extraneous to your model or analysis.
  – Do not be verbose, since judges have only limited time to read and evaluate your paper.

• The Model
  – Develop your model—do not just provide a laundry list of possible models.
  – Start with a simple model and then refine it.

• Computer Programs
  – If a program is included, clearly define all parameters.
  – Always include an algorithm in the body of the paper for any code used.
  – If running a Monte Carlo simulation, be sure to run it enough times to have a statistically significant output (a brief sketch follows this list).

• Validation
  – Check your model against some known baseline.
  – Check sensitivity of parameters to your results.
  – Check to see if your recommendation/conclusions make common sense.
  – Use real data.
  – The model should represent human behavior and be plausible.

• Resources
  – All work needs to be original or referenced; a reference list at the end is not sufficient!
  – Teams can only use inanimate resources—no real people or people consulted over the Internet.
  – Surf the Web, but document sites where obtained information is used.
  – This problem lent itself to a literature search, but few teams did one.

• Summary
  – This is the first piece of information read by a judge. It should be well written and contain the bottom-line answer or result.
  – This summary should motivate the judge to read your paper to see how you obtained your results.
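To illustrate the Monte Carlo point above, the sketch below simulates no-shows on one overbooked flight and reports the standard error of the estimated revenue, which is one simple check that enough trials have been run. The capacity, tickets sold, show-up probability, fare, and bump compensation are assumed, illustrative values, not figures from any team's paper.

```python
# Minimal Monte Carlo sketch with assumed, illustrative parameter values.
# It estimates expected net revenue for one overbooked flight and reports a
# standard error so the adequacy of the number of trials can be checked.
import random
import statistics

def simulate_revenue(trials=100_000, capacity=150, tickets_sold=165,
                     p_show=0.90, fare=200.0, bump_cost=400.0, seed=1):
    random.seed(seed)
    revenues = []
    for _ in range(trials):
        # Each ticketed passenger shows up independently with probability p_show.
        shows = sum(random.random() < p_show for _ in range(tickets_sold))
        bumped = max(0, shows - capacity)
        revenues.append(tickets_sold * fare - bumped * bump_cost)
    mean = statistics.fmean(revenues)
    std_err = statistics.stdev(revenues) / trials ** 0.5
    return mean, std_err

if __name__ == "__main__":
    mean, std_err = simulate_revenue()
    print(f"estimated expected revenue: {mean:.2f} +/- {std_err:.2f} (1 standard error)")
```

If the standard error is large relative to the revenue differences being compared (for example, between two candidate booking levels), the number of trials is not yet large enough.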
Judging

The judging is accomplished in two phases. Phase I, at a different site, is “triage judging.” These are generally only 10-minute reads, with a subjective scoring from 1 (worst) to 7 (best). Approximately the top 50% of the papers are sent on to the final judging.

Phase II is done with different judges and consists of a calibration round and another subjective round based on the 1–7 scoring system. Then the judges collaborate to develop a 100-point scale to enable them to “bubble up” the better papers. Four or more longer rounds are accomplished using this scale, followed by a lengthy discussion of the final group of papers.

Reflections of Triage

• Lots of good papers made it to the final judging.
• The initial summary made a significant difference in the papers (results versus an explanation).
• The report to the CEO also made a significant difference in the papers.
Triage and Final Judges’ Pet Peeves

• Tables with columns headed with Greek letters or acronyms that are not immediately understood.
• Definition and variable lists that are embedded in a paragraph.
• Equations used without explaining their terms and what the equation accomplishes.
• Copying derivations from other sources; citing the reference and briefly explaining the result is a better approach.

Approaches by the Outstanding Papers

Six papers were selected as Outstanding submissions because they:

• developed a workable, realistic model from their assumptions that could have been used to answer all elements;
• made clear recommendations;
• wrote a clear and understandable paper describing the problem, their model, and results; and
• handled all the elements.

The required elements, as viewed by the judges, were to

• develop a basic overbooking model that enabled one to find optimal values,
• consider alternative strategies for handling overbooked passengers,
• reflect on post-9-11 issues, and
• contain the CEO report of findings and analysis.

Most of the better papers did an extensive literature and Web search concerning overbooking by airlines and used this information in their model building. The poorest section in all papers, including many of the Outstanding papers, was the section on assumptions with rational justification. Many papers just skipped this section and went directly from the problem to model-building!

Most papers used a stochastic approach for their model. With interarrival times assumed to be exponential, a Poisson process was often used to model passengers. Teams moved quickly from the Poisson to a binomial distribution, with p and 1 − p representing “shows” and “no-shows” for ticketholders. Many teams started directly with the binomial distribution without loss of continuity. Some teams went on to use the normal approximation to the binomial. Revenues were generally calculated using some sort of “expected value” equation. Some teams built nonlinear optimization models, which was a nice and different approach.

Teams usually started with a simple example: a single plane with a fixed cost and capacity, one ticket price, and a reasonable value for no-shows based on historical data. This then became a model from which teams could build refinements, not only to their parameters but also to include the changes based on post-9-11. Teams often simulated these results using the computer and then made sense of the simulation by summarizing the results.
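As a concrete sketch of the approach just described, the code below implements a minimal single-plane, single-fare expected-value model: the number of ticketholders who show is binomial, the expected bumping cost follows from that distribution, and a direct search picks the booking level that maximizes expected net revenue. The capacity, fare, show-up probability, and bump compensation are assumed, illustrative values, not figures from any team's paper.

```python
# Minimal sketch of a single-plane, single-fare expected-value overbooking model.
# All parameter values are illustrative assumptions, not figures from any team's paper.
from math import comb

def expected_revenue(tickets_sold, capacity=150, p_show=0.90,
                     fare=200.0, bump_cost=400.0):
    """Expected net revenue when tickets_sold tickets are sold.

    The number of passengers who show is Binomial(tickets_sold, p_show);
    each passenger beyond capacity is compensated at bump_cost.
    """
    expected_bump = 0.0
    for k in range(capacity + 1, tickets_sold + 1):
        prob_k = comb(tickets_sold, k) * p_show**k * (1 - p_show)**(tickets_sold - k)
        expected_bump += (k - capacity) * prob_k
    return tickets_sold * fare - bump_cost * expected_bump

if __name__ == "__main__":
    # Direct search over candidate booking levels for a 150-seat plane.
    best = max(range(150, 181), key=expected_revenue)
    print(f"best booking level: {best} tickets "
          f"({(best - 150) / 150:.1%} overbooking), "
          f"expected revenue {expected_revenue(best):.2f}")
```

With these assumed parameters, the optimal booking level lands inside the 9–15% overbooking range noted later in this commentary; the optimum shifts with the assumed show-up probability and compensation level, which is exactly the sensitivity the judges wanted teams to examine.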
Wake Forest had two Outstanding papers. Team 69, with their paper entitled “ACE is High,” was the INFORMS winner because of its superior analysis. Both papers began using a binomial approach as their base model. Team 273 developed a single-plane model, a 2-plane model, and generalized to an n-plane model. Team 69 did a superb job in maximizing revenue after examining alternatives and varying their parameters.

The Harvey Mudd team, the MAA winner, had—by far—the best literature search. They used it to discuss existing models and to determine whether any could be used for post-9-11. Their research examined many of the current overbooking models that could be adapted to the situation.

The University of Colorado team used Frontier Airlines as their airline. They began with the binomial random variable approach, with revenues being expected values. They modeled both linear and nonlinear compensation plans for bumped passengers. They developed an auction-style model using Chebyshev’s weighting distribution. They also considered time-dependency in their model.

The Duke University team, the SIAM winner, had an excellent mix of literature search material and development of their own models. They too began with a basic binomial model. They considered multiple fares and related each post-9-11 issue to parameters in their model. They varied their parameters and provided many key insights into the overbooking problem. This paper was the first paper in a long time to receive an Outstanding from judges who had read their paper.

The Bethel College team built a risk assessment model. They used a normal distribution as their probability distribution and then put together an expected value model for revenue. Their analysis of Vanguard Airlines with a plane capacity of 130 passengers was done well.

Most papers found an “optimal” overbooking strategy to be to overbook between 9% and 15%, and they used these numbers to find “optimal” revenues for the airlines. Many teams tried alternative strategies for compensation, and some even considered the different classes of seats on an airplane.

All teams and their advisors are commended for their efforts on the Airline Overbooking Problem.
About the Author

Dr. William P. Fox is Professor and the Chair of the Department of Mathematics at Francis Marion University. He received his M.S. in operations research from the Naval Postgraduate School and his Ph.D. in operations research and industrial engineering from Clemson University. Prior to coming to Francis Marion, he was a professor in the Department of Mathematical Sciences at the United States Military Academy. He has co-authored several mathematical modeling textbooks and makes numerous conference presentations on mathematical modeling. He is a SIAM lecturer. He is currently the director of the High School Mathematical Contest in Modeling (HiMCM). He was a co-author of this year’s airline overbooking problem.