Australian judges’ commentary for 2018

Introduction

The Australian IMMC judging panel congratulates the 75 teams from across Australia that registered to participate in the 2018 Challenge and went on to prepare and submit team reports. It is wonderful to see interest in the IMMC growing in Australia, mirroring the growing interest in the IMMC and in mathematical modelling more generally around the world. Judges were impressed by the very good features present in many of the reports. Some teams did very well, getting “right to the heart of the matter”.

For the 2018 Challenge, a total of 163 teams registered to participate, a level of engagement substantially above the 75 teams that went on to submit a report. These increased levels of interest and participation suggest that schools and teachers are keen to support such activities, and reflect a desire by students to engage in challenging activities that allow them to work collaboratively on problems with real-world significance. The IMMC presents an opportunity to make connections between the mathematical learning students experience at school and the many situations around us in which mathematical thinking and knowledge can be really useful.

The 2018 problem, “The Best Hospital”, asked students to consider the factors that contribute to variation in hospital quality, and to devise a decision-making model that would permit a prospective patient to choose a hospital under circumstances other than an emergency situation. 

Teams could choose a period of up to five days within a specified 17-day window to work on the problem. Teams were expected to prepare a written report of up to 20 pages, plus a one-page summary, and a user-friendly memo of up to two pages that would help a non-expert use the team’s results to make a decision about which hospital to choose.

This was a particularly challenging problem given the complexity of the context, the range of different variables that might be relevant, and the difficulty of obtaining data to analyse the possibilities, devise and test a decision-making model, and evaluate the stability and usefulness of the model developed. Perhaps the biggest challenge for teams was to complete their work in the very limited time available. One of the consequences of this time pressure was that errors with mathematical symbols and representations were often not detected and fixed prior to submitting reports. Future teams may find it helpful to set a timetable for the five working days, so that all stages of the process receive due attention, including sufficient time to prepare a polished report.

Background issues that tripped up some teams 

The IMMC website provides instructions and rules, and advice about preparing a team report for submission. This information was overlooked by some teams, which in some cases resulted in the team report being declared ineligible for further consideration. In other cases reports contained material that was not considered by the judges. Here are some provisions that future teams should note carefully:

  • Submissions must be in digital form, in English, comprising only written text, possibly figures or charts, and all in a form that can easily be printed out on paper (this means, for example, that computer applications should not be submitted).
  • Reports should not exceed 20 pages, plus a one-page summary sheet (and in 2018 an additional two-page memo). Appendices and references can be added, and these do not count towards the page limit.
  • Each page of the solution must contain the team control number at the top of the page. The names of students, advisors or institution must NOT appear on any page of the report. To achieve anonymity and fairness in the judging process, the only identifying information permitted is the team control number.

Observations from the more successful team reports

1. General observations

The IMMC website includes a list of ‘additional recommendations’ intended to help teams produce and present the best possible solution paper. The points are reproduced here:

  • Conciseness and organisation are extremely important. Key statements should present major ideas and results.
  • Present a clarification or restatement of the problem as appropriate.
  • Present a clear exposition of all variables, assumptions and hypotheses.
  • Present an analysis of the problem, motivating or justifying the modelling to be used.
  • Include a design of the model. Discuss how the model could be tested.
  • Discuss any apparent strengths or weaknesses of your model or approach.
  • Incorporate lengthy derivations, computations or illustrative examples in appendices. Summarise these in the main report. Results must be explicitly stated in the body of the report.

The more successful team reports in the 2018 Challenge included the following general features from that list:

Define the problem: The teams clarified or restated the problem in their own words to show exactly how they would interpret the problem (for example, what ‘best’ could mean) and how they would translate it into the problem they would actually work on. Sometimes this involved simplifying the problem or identifying elements that would be omitted or treated later.

Identify all variables and assumptions: The teams presented clear definitions of the variables they chose to consider and gave reasons for their choice. Sometimes this included identifying assumptions made in choosing variables and explaining the importance of those assumptions.

Test the model: As well as clearly presenting the model it developed, a team should evaluate that model. This can include describing its strengths and weaknesses. An important part of testing a model involves investigating how sensitive it is to changes in the assumptions on which it was based, or to changes in the data values used to develop it. The model is intended to solve a real problem, so a consideration of how effectively it does so is essential. If a small change in one of the parameter values creates big changes in outcome, the model may need revision.
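
As a concrete illustration, the sketch below shows one simple stability check of this kind: perturb one parameter by 10% and see whether the recommendation changes. It assumes a weighted-sum model; all factor names, weights and scores are hypothetical.

    # A minimal sensitivity check, assuming a simple weighted-sum model.
    # All factor names, weights and hospital scores are hypothetical.

    def best_hospital(weights, hospitals):
        def score(factors):
            # Weighted sum of normalised factor scores (higher is better).
            return sum(weights[f] * factors[f] for f in weights)
        return max(hospitals, key=lambda name: score(hospitals[name]))

    hospitals = {
        "Hospital A": {"survival": 0.97, "cleanliness": 0.80, "waiting": 0.60},
        "Hospital B": {"survival": 0.95, "cleanliness": 0.90, "waiting": 0.75},
    }
    weights = {"survival": 0.6, "cleanliness": 0.2, "waiting": 0.2}

    baseline = best_hospital(weights, hospitals)
    # Increase one weight by 10% (without renormalising, for simplicity).
    perturbed = dict(weights, survival=weights["survival"] * 1.1)
    if best_hospital(perturbed, hospitals) == baseline:
        print(f"Stable: {baseline} is still recommended")
    else:
        print("The recommendation is sensitive to the 'survival' weight")

A report would summarise the outcome of such checks in the body and place the workings in an appendix.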

Organise the report: The better team reports were clear and concise. They did not include the full detail of their brainstorming, revisions and refinements of their model(s); rather, where a selected list of assumptions was used, the relevance of each assumption was justified and the team explained how and why it was included. If the team included several factors but decided that some were more important than others, reasons were given. Additional material not central to the final solution, including computer code, was placed in an appendix rather than in the body of the report.

2. Specific observations

The following comments, formulated by the Australian judging panel, highlight specific lessons learned from the judging.

The fundamental approach to the problem:

A first issue relates to basic interpretation of the task. The better reports showed that the team appreciated the importance of words in the problem statement such as ‘best’ and ‘choice’. The first paragraph of the problem statement directs attention to non-emergency situations in which choice of the best hospital is the key concern. How should ‘best’ be defined? Which variables that determine quality should be used, and how should they be measured?

  • Many teams failed to take this on board adequately. They mixed issues relating to emergency care into their considerations. For example, some teams spent considerable effort defining emergency triage processes and outcomes, which may be largely irrelevant to the kind of decision needed to choose a hospital under the conditions specified. 
  • If medical and survival outcomes of a hospital’s emergency department are to be factored into the definition of ‘best hospital’, then the variety of factors that differentiate general provision from emergency provision should be treated clearly and directly, so that the relevance of emergency treatment outcomes to a judgement of hospital quality is made apparent. For example, an emergency department might specialise in road trauma, which would slant analysis of treatment outcomes in a particular way.
  • The 2018 problem assumed the patient's condition was known, and that hospital treatment was needed. Therefore, factors such as timeliness or accuracy of diagnosis at, or prior to, admission and time needed to rush a patient to hospital by ambulance are not relevant.

A second issue fundamental to interpreting the problem statement is the need to devise a model to compare hospitals, not to seek factors that might reduce avoidable deaths.

  • Many teams did interesting work to identify factors that affect mortality rates and other desirable features in a treatment environment. The better team reports went on to apply the results to inform comparisons and a choice among hospitals.

A third key observation concerns the extent to which team reports provided a two-page memo that would help a non-technical person to choose a hospital. This expectation was clearly expressed in the problem statement.

  • The better team reports presented a memo giving a very clear guide to choosing among different hospitals, based on a model that they properly understood and could communicate. Their memos did not require special medical expertise, high-level mathematical expertise or skill at computer programming.

Definitions and treatments

The better team reports defined the important terms and concepts that they used and treated them consistently throughout their investigation.

  • The better team reports introduced a clear working definition of the term ‘mortality rates’, including how they distinguished evitable from inevitable deaths. In treating this variable, the better reports clearly showed that they had worked with rates rather than counts.
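
A small worked example, using entirely invented figures, shows why rates rather than counts matter:

    \frac{30\ \text{deaths}}{10\,000\ \text{admissions}} = 0.3\% \qquad \text{versus} \qquad \frac{12\ \text{deaths}}{2\,000\ \text{admissions}} = 0.6\%

On raw counts the first hospital looks worse (30 deaths against 12), yet its mortality rate is half that of the second.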

Testing and evaluation of models

Real data, where it is available, is usually preferred for evaluating and testing a model. The better team reports looked for a way to obtain data for this purpose, or at least simulated data that could be used. Real data is also preferred in a ‘sensitivity analysis’ of a model (for example, “how would our results change if value X were to change by 10%?”), but even if only invented data is available, variations can still be deliberately explored to test the sensitivity of the results.
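
Where no real data can be found, a team might simulate plausible data and then perturb it. Below is a minimal sketch of that idea; the admission numbers and mortality rates are hypothetical, and deaths are approximated as independent per-admission outcomes.

    import random

    random.seed(1)  # fix the seed so the simulated data is reproducible

    def simulate_deaths(admissions, mortality_rate):
        # Each admission independently ends in death with the given
        # probability, so total deaths are effectively a binomial draw.
        return sum(random.random() < mortality_rate for _ in range(admissions))

    # Baseline scenario versus "value X changed by 10%".
    baseline = simulate_deaths(5000, 0.020)
    perturbed = simulate_deaths(5000, 0.022)
    print(f"simulated deaths: baseline {baseline}, with rate +10% {perturbed}")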

How the judging panel used the assessment criteria

The IMMC website provides the following checklist for report writing:

  • Describing the real-world problem being addressed.
  • Specifying the resulting mathematical questions precisely.
  • Listing all assumptions and their justification.
  • Indicating sources of imported information (for example, websites).
  • Explaining how numerical values used in calculations were decided on.
  • Showing and justifying all mathematical working.
  • Setting out all mathematical working, graphs, tables, and so on.
  • Interpreting mathematical results in terms of the real-world problem.
  • Evaluating the result. Does your answer make sense? Does it help to answer the problem?
  • Dealing with refinements to the original problem.
  • Qualifying the solution.
  • Recommending the solutions arising from the work. What further work is needed?

The IMMC judging panel used a set of criteria based on that checklist. The panel paid attention to at least three aspects of the reports it was viewing: the modelling process that teams followed, the mathematical work that was presented, and the clarity and quality of the communication shown in the report.

We note five areas of special interest in evaluating the 2018 modelling efforts:

  1. Problem specification
    Any problem statement needs to be interpreted so that the real-world problem to be solved has been clearly identified, and this has been translated into one or more precise mathematical questions that the team will answer.

    For the 2018 problem, teams needed to interpret a number of terms and ideas such as ‘evitable’ and ‘inevitable’ deaths, what ‘best’ would mean, and how a ‘choice’ could be made. To be most effective, the work needed to be built around some quite specific mathematical way(s) of quantifying these concepts and terms in order to support decision making.
  2. Model formulation
    Formulating a mathematical model can involve several steps: making certain assumptions (and explaining why they are made or needed); choosing the specific variables that will be analysed (and justifying those choices); identifying suitable data, which may be available, simulated, or a combination of both; choosing particular values of parameters to be investigated (and justifying those choices); and devising a suitable mathematical way to represent the variables and parameters that make up the model.

    For the 2018 problem, several teams made use of one or more versions of a ‘standardised mortality ratio’ (a simple form of this ratio is sketched below). In some cases this variable was defined and applied to a particular medical procedure, in other cases to an entire hospital. Such approaches reflected a view that the most important variables related to the particular condition of the person seeking admission, while some teams focused instead on hospital-wide variables. The panel keenly sought justification of the choices made in that regard.

    In proposing a model, a team should check its algebraic or logical form, to make sure that every function is well defined over the possible parameter ranges (for example, no division by zero) and that measures remain meaningful (for example, percentages always lie in the interval [0, 100]).
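
    As a point of reference, the standardised mortality ratio mentioned above is commonly expressed as

        \text{SMR} = \frac{\text{observed deaths}}{\text{expected deaths}}

    where the expected deaths come from a reference population or case mix, however the team chooses to define it. The sketch below illustrates the kind of well-definedness checks just described; the function names and guard values are illustrative only.

        def smr(observed_deaths, expected_deaths):
            # An SMR is undefined when no deaths are expected:
            # guard against division by zero.
            if expected_deaths <= 0:
                raise ValueError("expected deaths must be positive")
            return observed_deaths / expected_deaths

        def as_percentage(part, whole):
            # Keep percentages on the interval [0, 100] by construction.
            if whole <= 0 or not 0 <= part <= whole:
                raise ValueError("invalid inputs for a percentage")
            return 100 * part / whole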
  3. Mathematical processing
    The mathematics used should be relevant and correctly applied. Some teams attempted to use quite high-level mathematics; in some cases this was successful, but not always. Other teams were less ambitious in the level of the mathematical procedures they applied, and what they did was done correctly and effectively.

    Mathematical processing could include the use of technology. In solving the 2018 problem, some teams proposed the use of computer code to process data; in many cases the code was not fully explained and its purpose was not always clear. Other teams used graphing technology to present their results in useful ways that helped the reader interpret conclusions.

    Some teams located useful data; others used simulated data to support their investigation. In either case, the judges looked carefully for evidence that teams tested how a change in the data might affect the team’s conclusions.

    Whatever mathematical processing is carried out, teams need to interpret their mathematical results in relation to the problem under investigation. For example, in presenting statistical arguments, a team should be able to interpret any mean or standard deviation (or other measure of spread) and explain it in their report.
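
    As a brief sketch of that kind of interpretation, using invented mortality rates for six hypothetical hospitals:

        from statistics import mean, stdev

        # Invented mortality rates (%) for six hypothetical hospitals.
        rates = {"A": 1.8, "B": 2.1, "C": 2.3, "D": 2.6, "E": 3.0, "F": 4.9}

        m = mean(rates.values())
        s = stdev(rates.values())
        print(f"mean {m:.2f}%, standard deviation {s:.2f}%")

        # Interpretation: as a crude screen, flag any hospital more than
        # one standard deviation above the mean on this measure.
        flagged = [h for h, r in rates.items() if r > m + s]
        print("flagged for a closer look:", flagged)  # -> ['F']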
  4. Model evaluation
    There are different ways to approach the evaluation of models, which we see as an essential feature of good mathematical modelling. Several teams identified strengths and weaknesses of their approach and of the models they proposed in relation to the problem being solved. In a small number of cases, that kind of evaluation was supported by some evidence from their analysis. 

    In a very small number of cases, teams recognised shortcomings in their approach and introduced some refinements to their work. A team shows its modelling skill partly by indicating how a model has been revised or refined in the light of testing against data, or for other reasons.

    In almost no cases did teams evaluate the sensitivity of their solution to changed assumptions or conditions. Teams could have explored, for example, the impact on their predicted outcomes if the mortality rate were to increase or decrease by some specified amount. Similarly, the impact of changing other variables included in their extended model could have been considered.
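
    To make this concrete, consider a hypothetical linear score S built from a mortality rate r and another quality index q, with invented weights. If a 2% mortality rate rises by 10% (to 2.2%, so the change in r is 0.002), the change in the score follows directly:

        S = w_m (1 - r) + w_q q, \qquad w_m = 0.7, \; w_q = 0.3

        \Delta S = -w_m \, \Delta r = -0.7 \times 0.002 = -0.0014

    Whether a score change of that size is enough to alter the ranking of hospitals is precisely the question a sensitivity analysis answers.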
  5. Report quality
    IMMC teams have produced different styles of modelling reports. Some have been structured around the elements of the problem statement and the mathematical results obtained; others have been structured around the steps of the modelling process followed. Either style can work well, as long as the report is a coherent and succinct communication of the team’s findings.

    A report should, ideally, be free of spelling mistakes and poor grammar. Such errors mar the appearance and readability of any report and, in some cases, can make the intended meaning ambiguous.

    In 2018, a brief glossary setting out the medical terms used and the definitions the team adopted was a useful inclusion in some team reports.