Judge's Commentary: The Outstanding Stunt Person Papers

William P. Fox
Dept. of Mathematics
Francis Marion University
Florence, SC 29501
bfox@fmarion.edu

Introduction

Once again, Problem A proved to be a nice challenge for both the students and the judges. The students did not have a wealth of information for the overall model from the Web or from other resources. Students could find basic information for helping model the jumping of the elephant from many sources. This problem turned out to be a "true" modeling problem; the students' assumptions led to the development of their model.

The judges had to read and evaluate many diverse (yet sometimes similar) approaches in order to find the "best" papers. Judges found mistakes even in these "best" papers, so it is important to note that "best" does not mean perfect. Many of these papers contain errors in modeling, assumptions, mathematics, and/or analysis. The judges must read and apply their own subjective analyses to evaluate critically both the technical and expository solutions presented by the teams. No paper analyzed every element nor applied critical validation and sensitivity analysis to all aspects of its model.

Advice to Students and Advisors

At the conclusion of the judging, the judges offered the following comments:

• Follow the instructions
  – Clearly answer all parts.
  – List all assumptions that affect the model and justify your use of them.

The UMAP Journal 24 (3) (2003) 317–322. Copyright © 2003 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored.
To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.
  – Make sure that your conclusions and results are clearly stated. Any mission directives or additional questions need to be addressed.
  – Restate the problem in your own words.
  – At the end, ask yourselves the question, "Does our answer make intuitive and/or common sense?"

• "Executive" Summary (Abstract) of Results

  The summary is the first piece of information read by a judge. It should be well written and contain the bottom-line answer or result. It should motivate the judge to read your paper to see how you obtained your results. The judges place considerable weight on the summary, and winning papers are sometimes distinguished from other papers based on the quality of the summary. To write a good summary, imagine that a reader may choose whether or not to read the body of the paper based on your summary. Thus, a summary should clearly describe your approach to the problem and, most prominently, your most important conclusions. Summaries that are mere restatements of the contest problem or are cut-and-pasted boilerplate from the Introduction are generally considered weak.

  – Put the "bottom line results and managerial recommendations" in the summary.
  – Be succinct; do not include a discussion of methods used, and do not just list a description or historical narrative of what you did.

• Clarity and Style
  – Use a clear style and do not ramble.
  – Do not list every possible model or method that could be used in a problem.
  – A table of contents is very helpful to the judges.
  – Pictures, tables, and graphs are helpful, but you must explain them clearly.
  – Do not include a picture, table, or graph that is extraneous to your model or analysis.
  – Do not be overly verbose, since judges have only 10–30 min to read and evaluate your paper.
  – Include a graphic flowchart or an algorithmic flowchart for computer programs used or developed.

• The Model
  – Develop your model—do not just provide a laundry list of possible models, even if explained.
  – Start with a simple model, follow it to completion, and then refine it. Many teams built a model without air resistance and—before determining the number of boxes needed—they refined only that part of the model to include air resistance.

• Computer Programs
  – If computer programs are included, clearly define all parameters.
  – Always include an algorithm in the body of the paper for any code used.
  – If running a Monte Carlo simulation, be sure to run it enough times to have a statistically significant output.

• Validation
  – Check your model against some known baseline, if possible, or explain how you would do this.
  – Check the sensitivity of your results to the parameters.
  – Check to see if your recommendations/conclusions make common sense.
  – Use real data.
  – The model should represent human behavior and be plausible.

• Resources
  – All work needs to be original, or else references must be cited—with specific page numbers—in the text of the paper; a reference list at the end is not sufficient. (This particular problem lent itself to a good literature search.)
  – Teams may use only inanimate resources; this does not include people.
  – Surf the Web, but document sites where you obtained information that you used.

Judging

The judging is accomplished in two phases.

• Phase I is "triage judging." These are generally only 10-min reads with a subjective scoring from 1 (worst) to 7 (best). Approximately the top 40% of papers are sent on to the final judging.

• Phase II, at a different site, is done with different judges and consists of a calibration round followed by another subjective round based on the 1–7 scoring system. Then the judges collaborate to develop a unique 100-point scale that will enable them to "bubble up" the better papers. Four or more longer rounds are accomplished using this scale, followed by a lengthy discussion of the final group of papers.
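The Monte Carlo advice above can be made concrete. The sketch below is purely illustrative: the trial (a landing offset drawn from a normal distribution, checked against an assumed pile width) is a hypothetical stand-in for whatever a team actually simulates. The point is that the standard error of the estimate shrinks like 1/√n, so a team can report it and justify its choice of run count.

```python
import math
import random

def estimate_probability(trial, n_runs, seed=0):
    """Run a Monte Carlo experiment n_runs times and report the
    estimated probability together with its standard error, so the
    team can judge whether n_runs gives a statistically meaningful answer."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    successes = sum(trial(rng) for _ in range(n_runs))
    p_hat = successes / n_runs
    # Standard error of a proportion; shrinks like 1/sqrt(n_runs).
    se = math.sqrt(p_hat * (1 - p_hat) / n_runs)
    return p_hat, se

def landing_inside_pile(rng):
    """Hypothetical trial: does a simulated landing fall on the box pile?
    The spread (1 m) and pile half-width (2 m) are invented numbers."""
    x = rng.gauss(0.0, 1.0)
    return abs(x) < 2.0

p_small, se_small = estimate_probability(landing_inside_pile, 100)
p_big, se_big = estimate_probability(landing_inside_pile, 100_000)
# With 100,000 runs the standard error is roughly 30 times smaller
# than with 100 runs; quoting se alongside p_hat is what demonstrates
# "statistically significant output" to a judge.
```

A team that reports only a point estimate from a handful of runs leaves the judge no way to tell noise from signal; reporting the standard error (or a confidence interval) closes that gap.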
Reflections of the Triage Judges

• Lots of good papers made it to the final judging.

• The summary made a significant difference. A majority of the summaries were poor and did not tell the reader the results obtained. A large number of teams simply copied and pasted their introductory paragraphs, which have the different purpose of establishing the background for the problem.

• The biggest thing that caught the judges' eye was whether or not the team paid attention to the questions asked in the problem. A number of teams addressed what they knew but didn't consider the real question—they got cut quickly.

• The next deciding factor was whether a team actually did some modeling or simply looked up a few equations and tried to shoehorn those into the problem. We looked for evidence that modeling had taken place; experiments were good to see.

• A final concern was the quality of writing—some was so poor that the judge couldn't follow or make any sense out of the report.

Triage and final judges' pet peeves include:

• Acronyms that are not immediately understood and tables with columns headed by Greek letters.
• Definition and variable lists that are embedded in a paragraph.
• Equations used without explaining their terms and what the equation accomplished.
• Copying derivations from other sources—a better approach is to cite the reference and explain briefly.

Approaches by the Outstanding Papers

Six papers were selected as Outstanding submissions because they:

• developed a workable, realistic model from their assumptions and used it to answer all elements of the stunt person scenario;
• made clear recommendations as to the number of boxes used, their size, and how they should be stacked, and offered a generalization to other stunt persons;
• wrote a clear and understandable paper describing the problem, their model, and results; and
• handled all the elements.

The required elements, as viewed by the judges, were in two distinct phases.

• Models needed to consider the mission of the stunt person. A model had to be developed that ensured that the stunt person could jump over the elephant. The better teams then worked to minimize the speed with which to accomplish this jump. Teams that used a high-speed jump were usually discarded by the judges quickly.

• In Phase II, the model had to consider the landing; this included the speed, energy, force, and momentum of the jumper, so that the boxes could be used safely to cushion the fall.

Most of the better papers did an extensive literature and Web search for information about cardboard. However, many teams spent way too much energy researching cardboard; their time would have been better spent in modeling. The poorest section in all papers, including many of the Outstanding papers, was the summary.

Another flaw found by the judges was the misuse of ECT (Edge Compression Testing) in a proportionality model. It is true that a proportionality exists, as shown in Figure 1; but that proportionality is not the one used by the teams.

[Figure 1. Force as a function of the deflection δ in ECT.]

Many started with

    BCS = 5.87 × ECT × √(P × t),

where P is the perimeter of the box and t is its thickness. Many went on to develop an energy model, where energy is the area under the curve, namely ½ × P × ECT × h (a "magic number"). However, this proportionality is flawed. The potential energy is an area, but it is ½ × P × ECT × δ, which is not an equivalent statement (see Figure 1).
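The size of the error the judges flagged is easy to see numerically. The sketch below codes the two formulas quoted above exactly as written; every numerical value (ECT, box dimensions, deflection at collapse) is an invented illustration, not data from any contest paper. Consistent SI units are assumed, with ECT taken as a crush resistance per unit edge length.

```python
import math

def box_compression_strength(ect, perimeter, thickness):
    """McKee-style formula quoted in the text:
    BCS = 5.87 * ECT * sqrt(P * t)."""
    return 5.87 * ect * math.sqrt(perimeter * thickness)

def energy_absorbed(ect, perimeter, deflection):
    """Area under the (assumed linear) force-deflection curve of Figure 1:
    E = 1/2 * P * ECT * delta, where delta is the deflection at failure."""
    return 0.5 * perimeter * ect * deflection

# Illustrative, assumed values in SI units:
ect = 5000.0    # edge crush resistance per unit length, N/m
P = 4 * 0.5     # perimeter of a 0.5 m square box, m
t = 0.004       # wall thickness, m
delta = 0.02    # deflection at collapse, m -- NOT the box height h

bcs = box_compression_strength(ect, P, t)        # strength of one box, N
e_box = energy_absorbed(ect, P, delta)           # energy per box, J
e_wrong = energy_absorbed(ect, P, 0.5)           # flawed version using h = 0.5 m
# Substituting the box height h for the deflection delta overstates the
# energy each box absorbs by the factor h/delta (here 25x), which in turn
# understates the number of boxes needed for a safe landing.
```

With these assumed numbers the flawed model claims each box absorbs 25 times the energy that the correct area under the curve gives, which is exactly the kind of error that sensitivity checking and common-sense validation should catch.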
The better papers used a variety of methods to model the safe landing. These included kinematics, work, and energy absorption. The discussion of the boxes and how they were to be secured was also an important feature. One team laid many large mattress boxes flat on top of the stacked boxes to give a smooth landing area. The judges enjoyed the creativity of the teams in this area.

About the Author

Dr. William P. Fox is Professor and Chair of the Department of Mathematics at Francis Marion University. He received his M.S. in operations research from the Naval Postgraduate School and his Ph.D. in operations research and industrial engineering from Clemson University. Prior to coming to Francis Marion, he was a professor in the Department of Mathematical Sciences at the United States Military Academy. He has co-authored several mathematical modeling textbooks and makes numerous conference presentations on mathematical modeling, as well as being a SIAM lecturer. He is currently the director of the High School Mathematical Contest in Modeling (HiMCM). He was a co-author of last year's Airline Overbooking Problem.