
8. Benchmark Report

Since v2018a, SUEWS has been benchmarked against observations to assess model performance. A site-based benchmark report generation system was introduced in v2018c to produce detailed reports for testing sites; the number of sites is expanding and more cases will be added as they are benchmarked.

Each report includes the following parts:

  1. Overall performance:

     1. Performance Score: larger scores indicate better performance. The scores are calculated as weighted averages of statistics for selected benchmark variables (see the score sketch after this list).

     2. Detailed Statistics: grids are coloured according to relative performance between versions: a greener grid indicates better performance in the chosen variable for that release, a redder one shows poorer performance, and grids with grey backgrounds indicate identical performance across releases.

  2. Cross-comparison of model variables between releases:

     1. Detailed statistics tables: statistics for each variable.

     2. Pair plots: comparison of simulation results between different version pairs.

     3. Time series plots: comparison of simulated monthly climatologies of the diurnal cycle of each variable between different version pairs (see the climatology sketch after this list).
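As an illustration of how a Performance Score of this kind could be assembled, the sketch below computes simple skill statistics per variable and combines them with user-chosen weights. The statistics, normalisation, variable names, and weights are assumptions made for illustration, not the exact formulation used by the SUEWS benchmark system.

```python
# Minimal sketch of a weighted performance score (assumed formulation).
import numpy as np
import pandas as pd


def skill_scores(obs: pd.Series, sim: pd.Series) -> dict:
    """Return simple skill statistics (larger = better) for one variable."""
    err = sim - obs
    mae = err.abs().mean()
    rmse = np.sqrt((err ** 2).mean())
    r = obs.corr(sim)
    # Convert error metrics to "larger is better" scores by normalising
    # against the observed variability (an assumed convention).
    scale = obs.std()
    return {"MAE": 1 - mae / scale, "RMSE": 1 - rmse / scale, "r": r}


def performance_score(data: dict, weights: dict) -> float:
    """Weighted average of per-variable skill scores.

    `data` maps variable name -> (obs, sim) series;
    `weights` maps variable name -> weight (e.g. the turbulent heat
    fluxes could be weighted more heavily than other variables).
    """
    per_var = {
        var: np.mean(list(skill_scores(obs, sim).values()))
        for var, (obs, sim) in data.items()
    }
    total = sum(weights.values())
    return sum(weights[v] * per_var[v] for v in per_var) / total
```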
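The monthly climatology of the diurnal cycle can be understood as averaging hourly output by hour of day within each calendar month, then plotting the two releases side by side. The sketch below assumes hourly output for a full year indexed by time, with one (hypothetically named) column per release; it is not the plotting code used to build the reports.

```python
# Minimal sketch of a monthly climatology of the diurnal cycle for two
# releases; column names such as "QH_v2018c" and "QH_v2019a" are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd


def plot_diurnal_climatology(df: pd.DataFrame, cols: list[str]) -> None:
    """df: hourly output with a DatetimeIndex covering a full year;
    cols: the same variable from each release to compare."""
    # Average by (month, hour of day) to obtain the climatology.
    clim = df[cols].groupby(
        [df.index.month.rename("month"), df.index.hour.rename("hour")]
    ).mean()
    fig, axes = plt.subplots(3, 4, figsize=(12, 7), sharex=True, sharey=True)
    for ax, (month, grp) in zip(axes.flat, clim.groupby(level="month")):
        grp.droplevel("month").plot(ax=ax, legend=False)
        ax.set_title(f"month {month}")
    axes.flat[0].legend(cols, fontsize="small")
    fig.supxlabel("hour of day")
    plt.show()
```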

The latest benchmark reports are available at the SUEWS Benchmark site.