Case C - Multiple Time Windows
The following example shows an experiment with multiple time windows.
TL;DR

In a terminal, navigate to floatcsep/tutorials/case_c and type:

```
$ floatcsep run config.yml
```

After the calculation is complete, the results will be summarized in `results/report.md`.
Experiment Components
The source code can be found in the tutorials/case_c folder or on GitHub. The input structure of the experiment is:
```
case_c
    ├── models
    │   ├── model_a.csv
    │   ├── model_b.csv
    │   ├── model_c.csv
    │   └── model_d.csv
    ├── config.yml
    ├── catalog.json
    ├── models.yml
    ├── tests.yml
    └── region.txt
```
Configuration
Time
The time configuration now sets a sequence of time intervals between the start and end dates.
```yaml
time_config:
  start_date: 2010-1-1T00:00:00
  end_date: 2020-1-1T00:00:00
  intervals: 10
  growth: cumulative
```

Note: The time interval `growth` can be either `cumulative` (all windows start from `start_date`) or `incremental` (each window starts from the previous window's end). The results of the experiment run will be associated with each time window (`2010-01-01_2011-01-01`, `2010-01-01_2012-01-01`, `2010-01-01_2013-01-01`, …).
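For comparison, a minimal sketch of the same experiment with incremental windows is shown below; since the ten intervals span ten years, each window covers a single year (the window labels in the comment follow from the `growth` description above):

```yaml
time_config:
  start_date: 2010-1-1T00:00:00
  end_date: 2020-1-1T00:00:00
  intervals: 10
  growth: incremental  # windows: 2010-01-01_2011-01-01, 2011-01-01_2012-01-01, ...
```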
Evaluations
The experiment’s evaluations are defined in `tests.yml`, which can now include temporal evaluations (see `sequential_likelihood`, `sequential_information_gain`, `plot_sequential_likelihood`):

```yaml
- S-test:
    func: poisson_evaluations.spatial_test
    plot_func: plot_poisson_consistency_test
    plot_args:
      title: Poisson S-test
      xlabel: Log-Likelihood

- Sequential Log-Likelihood:
    func: sequential_likelihood
    plot_func: plot_sequential_likelihood
    plot_args:
      title: Cumulative Log-Likelihood
      ylabel: Information Gain

- Sequential Information Gain:
    func: sequential_information_gain
    plot_func: plot_sequential_likelihood
    ref_model: Model A
    plot_args:
      title: Cumulative Information Gain (Model A as reference)
      ylabel: Information Gain
```
Note: Plot arguments (title, labels, font sizes, axes limits, etc.) can be passed as a dictionary in `plot_args` (see the argument details in `plot_poisson_consistency_test()`).
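As an illustration, a `plot_args` entry with a few extra styling keys might look like the sketch below. Only `title` and `xlabel` appear in this tutorial; the remaining keys are assumptions based on the note above and should be checked against the `plot_poisson_consistency_test()` documentation:

```yaml
- S-test:
    func: poisson_evaluations.spatial_test
    plot_func: plot_poisson_consistency_test
    plot_args:
      title: Poisson S-test
      xlabel: Log-Likelihood
      title_fontsize: 14   # assumed key: title font size
      xlim: [-300, 0]      # assumed key: x-axis limits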
Results
The run command

```
$ floatcsep run config.yml
```

now creates the result path tree for all time windows.
The testing catalog of each window is stored in `results/{time_window}/catalog` in `json` format; it is a subset of the global testing catalog. Human-readable results are found in `results/{time_window}/evaluations`, and the catalog and evaluation figures in `results/{time_window}/figures`. The complete results are summarized in `results/report.md`. The report shows the temporal evaluations for all time windows, whereas the discrete evaluations are shown only for the last time window.
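For orientation, the resulting path tree looks roughly like the sketch below. The window names follow the `start_end` convention shown earlier; the exact files inside each folder depend on the models and tests configured:

```
results
├── 2010-01-01_2011-01-01
│   ├── catalog
│   ├── evaluations
│   └── figures
├── 2010-01-01_2012-01-01
│   └── ...
├── ...
└── report.md
```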