C - Multiple Time Windows

TL;DR

In a terminal, navigate to floatcsep/examples/case_c and type:

$ floatcsep run config.yml

After the calculation is complete, the results will be summarized in results/report.md.

Artifacts

The following example shows an experiment with multiple time windows. The input structure of the experiment is:

case_c
    ├── models
    │   ├── model_a.csv
    │   ├── model_b.csv
    │   ├── model_c.csv
    │   └── model_d.csv
    ├── config.yml
    ├── catalog.json
    ├── models.yml
    ├── tests.yml
    └── region.txt

Configuration

Time

The time configuration now sets the number of time intervals between the start and end dates.

time_config:
  start_date: 2010-1-1T00:00:00
  end_date: 2020-1-1T00:00:00
  intervals: 10
  growth: cumulative

Note

The time interval growth can be either cumulative (all windows start from start_date) or incremental (each window starts from the previous window’s end).

The results of the experiment run will be associated with each time window (2010-01-01_2011-01-01, 2010-01-01_2012-01-01, 2010-01-01_2013-01-01, …).
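To make the window logic concrete, here is a minimal sketch of how such windows could be enumerated. The helper below is hypothetical and only illustrative — floatCSEP derives the windows internally from time_config — and splitting the range into equal timedeltas will not land exactly on calendar-year boundaries:

```python
from datetime import datetime

def time_windows(start, end, intervals, growth="cumulative"):
    # Split [start, end] into `intervals` equally long spans.
    # "cumulative": every window starts at `start` and grows toward `end`.
    # "incremental": each window starts where the previous one ended.
    step = (end - start) / intervals
    edges = [start + i * step for i in range(intervals + 1)]
    if growth == "cumulative":
        return [(edges[0], edge) for edge in edges[1:]]
    return list(zip(edges[:-1], edges[1:]))

windows = time_windows(datetime(2010, 1, 1), datetime(2020, 1, 1), 10)
```

With growth="cumulative", all ten windows share the 2010-01-01 start date and the last one spans the full decade; with growth="incremental", the windows tile the decade end to end.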

Evaluations

The experiment’s evaluations are defined in tests.yml, which can now include temporal evaluations (see sequential_likelihood(), sequential_information_gain(), plot_sequential_likelihood()).

- S-test:
    func: poisson_evaluations.spatial_test
    plot_func: plot_poisson_consistency_test
    plot_args:
      title: Poisson S-test
      xlabel: Log-Likelihood

- Sequential Log-Likelihood:
    func: sequential_likelihood
    plot_func: plot_sequential_likelihood
    plot_args:
      title: Cumulative Log-Likelihood
      ylabel: Information Gain

- Sequential Information Gain:
    func: sequential_information_gain
    plot_func: plot_sequential_likelihood
    ref_model: Model A
    plot_args:
      title: Cumulative Information Gain (Model A as reference)
      ylabel: Information Gain

Note

Plot arguments (title, labels, font sizes, axis limits, etc.) can be passed as a dictionary in plot_args (see details in plot_poisson_consistency_test()).
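To illustrate what the sequential evaluations accumulate, here is a minimal sketch of a cumulative log-likelihood and a cumulative information gain relative to a reference model. The per-window scores are made up, and the exact definitions (including any normalization by event counts) live in the functions listed above; this only shows the running-sum idea:

```python
from itertools import accumulate

# Hypothetical per-window joint log-likelihood scores for a model
# and a reference model (one value per time window).
ll_model = [-120.4, -98.2, -110.7, -101.3]
ll_ref = [-125.0, -99.5, -112.1, -104.8]

# Sequential log-likelihood: the cumulative sum across time windows.
seq_ll = list(accumulate(ll_model))

# Sequential information gain: the difference of the cumulative
# log-likelihoods with respect to the reference model (simplified).
seq_ig = [m - r for m, r in zip(accumulate(ll_model), accumulate(ll_ref))]
```

Each entry of seq_ll corresponds to one cumulative window (2010-2011, 2010-2012, ...), which is what plot_sequential_likelihood() draws against the window end dates.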

Results

The run command creates the result path tree for all time windows.

  • The testing catalog of each window is stored in results/{window}/catalog in JSON format. It is a subset of the global testing catalog.

  • Human-readable evaluation results are found in results/{window}/evaluations.

  • Figures for the catalogs and evaluation results are stored in results/{window}/figures.

  • The complete results are summarized in results/report.md.

The report now shows the temporal evaluations across all time windows, whereas the discrete evaluations are shown only for the last time window.