Report from Australia’s IM2C judging panel, 2016

This report was prepared by the Australian judging panel for the International Mathematical Modeling Challenge (IM2C), following the completion of judging of Australian team reports submitted for the 2016 Challenge.

Download the IM2C 2016 Problem

General comments: the criteria used by Australian judges

The Australian judging panel used a set of criteria, and a judgment system that involved rating report content against those criteria, to evaluate and compare the merits of Australian team entries to the IM2C in 2016. Reports were rated from 0 to 3 on each criterion, and the rating total was a key element of the judgments made. Five broad criteria were used, each with component sub-criteria.

The following general comments summarise advice from the judges in response to the team reports submitted in 2016. The advice aims to guide schools, team advisors and students to participate as effectively as possible in future Challenges, and is organised around the five broad criteria used to evaluate team reports.

Problem statement

An essential early step in working on a modelling problem is to decide what the problem is really about. Sometimes this is not obvious.

For the 2016 problem, scaffolding was provided in the form of a dataset and a set of specific questions, presented to guide teams through the stimulus material and to help them make worthwhile progress towards the desired end result. However, this scaffolding may have led some teams to think that the main problem was to answer these questions. In fact, a better approach would have been to treat those questions as stepping stones towards the problem as articulated in the final question, which asked for well-argued and well-supported advice to the race organising committee about how it should handle the risks associated with offering incentive payments to competing athletes across a number of events.

Model formulation, assumptions and their justification, choice of variables and parameter values

A number of interesting models were put forward as tools to tackle different parts of the problem. In the best reports, the models used were clearly identified, the variables used in the models were fully explained, and the underlying assumptions were identified, explained and justified. In solving real-world modelling problems, it is usually necessary to impose assumptions in order to simplify the situation sufficiently to make progress towards a solution.

In addition, the choice of data values used to test, demonstrate or justify the models is a significant part of the modelling process, and therefore also warrants solid argument and justification.

Testing the sensitivity of a model is also an important step in developing one that will function as desired in the situation for which it is built. Using a range of parameter values to evaluate this sensitivity is often a very useful activity.
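As an illustration of what such a sensitivity check might look like in practice, the following sketch (in Python, with entirely hypothetical values for the record probability and bonus amount, not drawn from any team's report) sweeps one assumed parameter and shows how the estimated bonus cost responds:

```python
# Minimal sketch of a parameter sweep (all values are hypothetical).
BONUS = 250_000  # hypothetical bonus amount, in dollars

def expected_cost(p_record: float, bonus: float = BONUS) -> float:
    """Expected bonus payout per race if a record falls with probability p_record."""
    return p_record * bonus

# Sweep the assumed probability to see how sensitive the cost estimate is.
for p in [0.02, 0.04, 0.06, 0.08, 0.10]:
    print(f"p = {p:.2f}  ->  expected cost per race = ${expected_cost(p):,.0f}")
```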

The use of mathematics, mathematical processing

The particular mathematical tools that could be applied to any modelling task can vary enormously. Sometimes a simple piece of mathematics is all that is needed to analyse some aspect of the problem situation. The level of complexity of the mathematics used is not necessarily a critical factor; rather, it is the relevance, appropriateness and usefulness of the mathematics that matters most, together with its interpretation in relation to the situation under study. The key factor is the fitness for purpose of the mathematics used.

For example, a graph of the data that shows a clearly non-linear relationship is a clue that a non-linear model should be explored. No matter how well a linear regression model appears to fit numerically, it is of little value if it is not appropriate for the particular problem.
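A quick check of this kind can often be done with a few lines of work. The following sketch, using made-up data, compares how well a straight line and a logarithmic curve describe the same points before either model form is committed to:

```python
import numpy as np

# Made-up data with a non-linear (logarithmic-looking) trend.
x = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
y = np.array([1.1, 1.9, 2.8, 4.2, 5.0, 5.9, 7.1])

# Fit a straight line y ~ a*x + b and a logarithmic curve y ~ a*ln(x) + b.
lin = np.polyfit(x, y, 1)
log = np.polyfit(np.log(x), y, 1)

# Compare the residual sums of squares for the two model forms.
rss_lin = np.sum((y - np.polyval(lin, x)) ** 2)
rss_log = np.sum((y - np.polyval(log, np.log(x))) ** 2)
print(f"linear RSS = {rss_lin:.2f}, logarithmic RSS = {rss_log:.2f}")
```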

The nature and level of mathematical content of good modelling reports can vary, and the relationship between the level and complexity of the mathematics used and the overall quality of the report is not a direct one. Use of simple mathematics presented as part of a well-argued and clearly described solution can be far more effective than attempts to use more advanced techniques that are poorly explained or justified.

Whatever the level of mathematics used, though, it should be used correctly, and this feature of a mathematical modelling report will stand out very clearly to reviewers, who will themselves have a mathematical background. For example, if a report refers to ‘equations’ that are missing a left-hand side, an equals sign, or a right-hand side (and so are expressions rather than equations), this will not be viewed positively. Similarly, if a formula from a book or article is used, it must be transcribed and applied correctly.

The use of mathematical tools, especially technological tools such as programmable calculators or statistical software, can be either a help or a hindrance. Wise decisions about the use of technology will stand out in a well-considered piece of modelling, as will poor decisions. Graphing tools can be very powerful aids to investigating relationships among variables, and statistical techniques can be very helpful in establishing best-fit relationship models. But including many colourful graphs that serve no real purpose will generally not contribute positively to a modelling report. Wherever possible, predictions made from the models applied should be checked against available (or constructed) data, and all mathematical work undertaken should be interpreted in relation to the problem under study.

It should be obvious to a reader why a particular mathematical tool or technique was chosen as the most useful in the circumstances. This will usually warrant explanation and justification, and the results obtained from that tool should be interpreted with clear reference to the purpose of the work.

Model evaluation

An essential element of a good modelling effort is thorough evaluation of the model. Does it give the information needed to make decisions about the questions at issue? To what extent would changes in the assumptions made, or in the way the problem is specified, affect the usefulness of the model? Sensitivity analysis and other forms of model evaluation can be very challenging, but they must be pursued.

Model evaluation can be undertaken by varying the numeric values that could apply in the situation under study, including by finding suitable data from relevant sources. In some circumstances this can be done systematically by simulating data, for example where historical data cannot easily be obtained.
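For instance, where historical data are scarce, a simple simulation can stand in for them. The sketch below uses entirely hypothetical parameter values (record probability, bonus amount, number of races) to simulate many possible periods and examine the spread of total bonus payouts:

```python
import random

# Minimal Monte Carlo sketch for evaluating a bonus-cost model when
# historical data are scarce. All parameter values are hypothetical.
P_RECORD = 0.05      # assumed probability a World Record falls in one race
BONUS = 250_000      # assumed bonus payout per record, in dollars
RACES_PER_YEAR = 2   # e.g. one men's and one women's race per event
YEARS = 10
TRIALS = 100_000

def simulate_total_payout() -> float:
    """Total bonus payout over the simulated period for one trial."""
    races = RACES_PER_YEAR * YEARS
    records = sum(1 for _ in range(races) if random.random() < P_RECORD)
    return records * BONUS

payouts = sorted(simulate_total_payout() for _ in range(TRIALS))
mean = sum(payouts) / TRIALS
print(f"mean payout over {YEARS} years: ${mean:,.0f}")
print(f"95th percentile payout:         ${payouts[int(0.95 * TRIALS)]:,.0f}")
```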

Report quality

A good quality report of the outcomes of a modelling task can take a variety of forms, and the most important feature of a high quality report is that it meets the need at hand. In the case of the 2016 IM2C problem, for example, the context of the problem statement provided the perfect opportunity to imagine the organising committee as the key audience for a piece of written advice on how to manage their event, attract high-level performers, and handle the associated risks through their insurance decisions. Such a report may have contained some mathematics, but only to the extent that it would help the organising committee understand the advice and the reasons for it.

A good modelling report does not look like a mathematics assignment. It is not a set of results presented as if in response to a test or a structured assignment; it is a piece of analysis and discussion that meets the needs of the particular situation under study. Detailed calculations and other mathematical content are often best presented as an attachment or appendix to the main report.

In writing a summary, the goal should be to engage the reader and draw them into the analysis provided. In the 2016 case, an ‘executive summary’ could have been a useful model for the report summary.

Specific comments about responses to the stimulus questions for 2016

Responses to Q1:

A significant differentiating feature of responses to Q1 (what is the average cost of the bonus?) was the way teams dealt with the male/female distinction. The key recognition is that there have effectively been 62 races, with three World Record results. One consequence of this framing is that it allows for the possibility of the male and female records being broken at the same event.

Most teams used some version of ‘average likelihood = number of records / number of events’, but some based their calculation on the average length of the gaps between World Record occurrences. Both methods imply a degree of linearity that is probably not well-founded. Averages calculated across the longest possible time frame seemed better.

A key aspect of finding a suitable average is predicting when (that is, with what frequency or likelihood) the next World Record might occur; this was often treated in depth in responses to Q2. A couple of responses modelled this using a logarithmic function, pointing out that World Records become, on average, less likely (less frequent) over time. However, other responses were rather mixed on whether a record that has stood for a long time is more or less likely to be broken in the foreseeable future.

One creative response recognised that the World Record could be broken in the very next event, and hence took (number of past World Record occurrences + 1) as the appropriate numerator in the calculation.
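To make the arithmetic concrete, a minimal version of the ‘average likelihood’ calculation might look like the sketch below. The 62 races and three records are the figures noted above; the bonus amount is a hypothetical placeholder, since the actual value is set in the problem statement rather than reproduced here:

```python
# Minimal version of the 'average likelihood' calculation for Q1.
RACES = 62       # effective number of races noted above
RECORDS = 3      # World Record results noted above
BONUS = 250_000  # hypothetical bonus amount, in dollars

p_naive = RECORDS / RACES        # records per race, historical average
p_next = (RECORDS + 1) / RACES   # variant allowing for a record in the next race

print(f"naive likelihood per race: {p_naive:.3f} -> average cost ${p_naive * BONUS:,.0f}")
print(f"'+1' adjusted likelihood:  {p_next:.3f} -> average cost ${p_next * BONUS:,.0f}")
```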

Responses to Q2:

Teams used a wide range of factors in considering the criteria the insurance company should use to determine an appropriate margin. The best reports gave a clear explanation and justification of the relevance of those factors.

Many teams found the quantification of weighting factors an insurmountable challenge. Where weightings were proposed, the justification for the choices made was not always apparent.

Consideration could also have been given to varying the amount of mark-up – that is, to exploring the impact of using parameter values other than the 20% suggested in the question.
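Such a check need not be elaborate. The sketch below, using a hypothetical expected payout, simply recomputes the premium for a range of mark-up values around the suggested 20%:

```python
# Sensitivity check on the insurer's mark-up (expected payout is hypothetical).
EXPECTED_PAYOUT = 12_000  # hypothetical expected bonus cost per event, in dollars

def premium(markup: float, expected_payout: float = EXPECTED_PAYOUT) -> float:
    """Premium charged as the expected payout plus a proportional mark-up."""
    return expected_payout * (1 + markup)

# Vary the mark-up around the 20% value suggested in the question.
for markup in [0.10, 0.15, 0.20, 0.25, 0.30]:
    print(f"mark-up {markup:.0%}: premium = ${premium(markup):,.0f}")
```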

Responses to Q3:

Teams also used a wide range of factors in considering the criteria the organising committee should use to decide whether or not to take out insurance. As with the insurer's perspective, the opportunity to quantify the weighting of factors and to explain and justify the choices made was not always taken up. For both the organising committee's and the insurer's perspectives, it was not always clear how the proposed factors could be used to make the decisions needed.

This part of team reports sometimes raised issues without spelling out the importance and relevance of the proposed factors. Many areas of uncertainty were discussed in this context – such as the financial strength of the organisation, the value of ticket sales, and the running costs of the event – which could have given rise to detailed model testing and evaluation. An alternative approach would have been to assume some form of zero-sum game, in which the potential liabilities are accounted for independently of the income or wealth of the organisation.

Responses to Q4:

Very few teams generalised their approach to a wider context in which many events might warrant the same kind of consideration.

Responses to Q5:

Several teams simply did not attempt to respond to the final question in any meaningful way. The general decision-making schemes that were proposed tended to follow a sequence of questions and possible decision paths, with very limited quantification to support the decisions needed.

Only one team presented a generalised and rigorous method of deciding when to insure for any of an unlimited number of events.


ROSS TURNER
Chair of judging panel
17 June 2016
contact@immchallenge.org.au