B - Multiple Models and Tests

The following example describes an experiment that includes multiple time-independent forecasts and multiple evaluations.

TL;DR

In a terminal, navigate to floatcsep/tutorials/case_b and type:

$ floatcsep run config.yml

After the calculation is complete, the results will be summarized in results/report.md.

Experiment Components

The source code can be found in the tutorials/case_b folder or on GitHub. The input structure of the experiment is:

case_b
    ├── config.yml
    ├── catalog.json
    ├── models.yml
    ├── tests.yml
    ├── region.txt
    └── models
        ├── model_a.csv
        ├── model_b.csv
        ├── model_c.csv
        └── model_d.csv

Important

Although not required, the testing catalog is here stored in the .json format, the default catalog format used by floatcsep, since it allows metadata to be stored alongside the events.

Note

A catalog can be stored as .json using pycsep's CSEPCatalog.write_json().
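
For instance, a minimal sketch with pycsep, assuming a hypothetical input file raw_catalog.csv in the CSEP ASCII catalog format:

# Minimal sketch: converting a catalog to .json with pycsep.
# "raw_catalog.csv" is a hypothetical input in the CSEP ASCII catalog format.
import csep

catalog = csep.load_catalog("raw_catalog.csv")
catalog.write_json("catalog.json")  # stores the events together with metadata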

Configuration

In this example, the time, region and catalog specifications are written in the config.yml file.

tutorials/case_b/config.yml
time_config:
  start_date: 2010-01-01T00:00:00
  end_date: 2020-01-01T00:00:00

region_config:
  region: region.txt
  mag_min: 4.0
  mag_max: 8.0
  mag_bin: 0.1
  depth_min: 0
  depth_max: 70

catalog: catalog.json

whereas the models' and tests' configurations are deferred to external files for better readability:

model_config: models.yml
test_config: tests.yml
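
For reference, the region_config entries map roughly onto the following pycsep objects. This is only an illustrative sketch (floatcsep builds the equivalents internally during a run), assuming region.txt contains one lon/lat cell origin per row:

# Illustrative sketch of what region_config describes, in pycsep terms;
# floatcsep builds the equivalent objects internally during a run.
import numpy
from csep.core.regions import CartesianGrid2D

origins = numpy.loadtxt("region.txt")           # assumed lon/lat cell origins
region = CartesianGrid2D.from_origins(origins)  # cell size inferred from origins
magnitudes = numpy.arange(4.0, 8.0 + 0.1, 0.1)  # mag_min to mag_max in mag_bin steps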

Models

The model configuration is now set in the models.yml file, where a list of model names specifies each model's file path.

tutorials/case_b/models.yml
- Model A:
    path: models/model_a.csv
- Model B:
    path: models/model_b.csv
- Model C:
    path: models/model_c.csv
- Model D:
    path: models/model_d.csv
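
Each file contains a gridded, time-independent forecast; the .csv format is parsed by floatcsep itself. As a hedged sketch of inspecting a forecast directly with pycsep, assuming a hypothetical pycsep-readable ASCII version of the model (model_a.dat):

# Sketch: loading a forecast with pycsep for inspection.
# "model_a.dat" is a hypothetical file in pycsep's ASCII forecast format;
# the tutorial's .csv files are parsed by floatcsep internally.
import csep

forecast = csep.load_gridded_forecast("models/model_a.dat", name="Model A")
print(forecast.event_count)  # total expected number of events in the grid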

Evaluations

The evaluations are defined in the tests.yml file as a list of evaluation names, with their functions and plots (see Evaluations). In this example, we use the N-, S-, M- and CL-consistency tests, along with the comparison T-test.

tutorials/case_b/tests.yml
- N-test:
    func: poisson_evaluations.number_test
    plot_func: plot_poisson_consistency_test
- S-test:
    func: poisson_evaluations.spatial_test
    plot_func: plot_poisson_consistency_test
    plot_kwargs:
      one_sided_lower: True
- M-test:
    func: poisson_evaluations.magnitude_test
    plot_func: plot_poisson_consistency_test
    plot_kwargs:
      one_sided_lower: True
- CL-test:
    func: poisson_evaluations.conditional_likelihood_test
    plot_func: plot_poisson_consistency_test
    plot_kwargs:
      one_sided_lower: True
- T-test:
    func: poisson_evaluations.paired_t_test
    ref_model: Model A
    plot_func: plot_comparison_test

Note

Plotting keyword arguments can be set with the plot_kwargs option (see plot_poisson_consistency_test() and plot_comparison_test()).
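
For illustration, a single consistency test from tests.yml amounts to roughly the following pycsep calls. This is a hedged sketch (floatcsep chains these steps for every model/test pair): model_a.dat stands in as a hypothetical pycsep-readable version of the forecast, and the plot_kwargs entry becomes a plotting keyword argument.

# Sketch: one consistency test (the S-test) done manually with pycsep.
# "model_a.dat" is a hypothetical pycsep-readable forecast file.
import csep
from csep.core import poisson_evaluations
from csep.core.catalogs import CSEPCatalog
from csep.utils import plots

forecast = csep.load_gridded_forecast("models/model_a.dat", name="Model A")
catalog = CSEPCatalog.load_json("catalog.json")
catalog.filter_spatial(forecast.region)

result = poisson_evaluations.spatial_test(forecast, catalog)
plots.plot_poisson_consistency_test(result, one_sided_lower=True)  # plot_kwargs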

Important

Comparison tests (such as the paired_t_test) require a reference model, whose name should be set as ref_model in the given test configuration.
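
In pycsep terms, the comparison looks roughly like the following sketch, with the forecast files again standing in as hypothetical pycsep-readable versions of the models:

# Sketch: the paired T-test of Model B against the reference Model A.
# Forecast files are hypothetical pycsep-readable versions of the models.
import csep
from csep.core import poisson_evaluations
from csep.core.catalogs import CSEPCatalog
from csep.utils import plots

ref_forecast = csep.load_gridded_forecast("models/model_a.dat", name="Model A")
forecast = csep.load_gridded_forecast("models/model_b.dat", name="Model B")
catalog = CSEPCatalog.load_json("catalog.json")
catalog.filter_spatial(forecast.region)

t_result = poisson_evaluations.paired_t_test(forecast, ref_forecast, catalog)
plots.plot_comparison_test([t_result])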

Running the experiment

The experiment can be run by navigating to the tutorials/case_b folder in a terminal and typing:

$ floatcsep run config.yml

This will automatically set up all the file paths of the calculation (testing catalogs, evaluation results, figures) and produce a summarized report in results/report.md.
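
The run can also be driven from Python. Treat the following as a hedged sketch: the Experiment method names below are assumptions about the floatcsep API and should be checked against the floatcsep documentation.

# Sketch of a programmatic run; the method names are assumptions and may
# differ between floatcsep versions - check the floatcsep documentation.
from floatcsep.experiment import Experiment

exp = Experiment.from_yml("config.yml")  # parse time, region, models and tests
exp.stage_models()                       # prepare the model files
exp.set_tasks()                          # build the task graph
exp.run()                                # evaluate and write results/report.md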