API Reference

This page is a reference for the floatCSEP API.

Commands

The commands and entry points for calling floatcsep from the terminal are:

run(config, **kwargs)

plot(config, **kwargs)

reproduce(config, **kwargs)
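
For example, assuming an experiment configuration in a hypothetical file config.yml:

    floatcsep run config.yml
    floatcsep plot config.yml
    floatcsep reproduce config.yml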

Experiment

The experiment is defined using the Experiment class.

Experiment([name, time_config, ...])

Main class that handles an Experiment's context.

Experiment.set_models(model_config[, order])

Parse the models' configuration file/dict.

Experiment.get_model(name)

Returns a Model by its name string.

Experiment.stage_models()

Stages all the experiment's models.

Experiment.set_tests(test_config)

Parse the tests' configuration file/dict.

Experiment.catalog

Returns a CSEP catalog loaded from the given query function or a stored file if it exists.

Experiment.set_test_cat(tstring)

Filters the complete experiment catalog to a test sub-catalog bounded by the test time-window.

Experiment.set_tasks()

Lazy definition of the experiment's core tasks by wrapping instances, methods and arguments.

Experiment.run()

Runs the task tree.

Experiment.read_results(test, window)

Reads an Evaluation result for a given time window and returns a list of the results for all tested models.

Experiment.plot_results()

Plots all evaluation results.

Experiment.plot_catalog([dpi, show])

Plots the evaluation catalogs.

Experiment.plot_forecasts()

Plots and saves all the generated forecasts.

Experiment.generate_report()

Creates a report summarizing the Experiment's results.

Experiment.to_yml(filename, **kwargs)

Serializes the Experiment instance into a .yml file.

Experiment.from_yml(config_yml[, reprdir])

Initializes an experiment from a .yml file.
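
A minimal sketch of a typical workflow, assuming the import path floatcsep.experiment and a hypothetical configuration file config.yml:

    from floatcsep.experiment import Experiment  # import path is an assumption

    exp = Experiment.from_yml("config.yml")  # hypothetical configuration file
    exp.stage_models()     # prepare the models for running
    exp.set_tasks()        # lazily define the core tasks
    exp.run()              # run the task tree
    exp.plot_results()     # plot all evaluation results
    exp.generate_report()  # summarize the experiment's results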

Models

A model is defined using the Model class.

Model(name, model_path[, forecast_unit, ...])

Class defining a forecast generating Model.

Model.get_source([zenodo_id, giturl, force])

Searches for, downloads or clones the model source from the filesystem, Zenodo or git, respectively.

Model.stage([timewindows])

Pre-steps to make the model runnable before integrating it into the experiment.

Model.init_db([dbpath, force])

Initializes the database if use_db is True.

Model.rm_db()

Cleans up the generated HDF5 file.

Model.get_forecast([tstring, region])

Wrapper that returns a forecast, hiding the access method (db storage, ti_td, etc.) under the hood.

Model.create_forecast(tstring, **kwargs)

Creates a forecast from the model source and a given time window.

Model.forecast_from_func(start_date, ...)

Generates a forecast by executing the model's forecast-producing function for the given time window.

Model.forecast_from_file(start_date, ...)

Generates a forecast from a file, by parsing and scaling it to the desired time window.

Model.from_dict(record, **kwargs)

Returns a Model instance from a dictionary containing the required attributes.
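
A minimal sketch of staging a file-based model and retrieving its forecast; the import path, the file name and the time-window string are assumptions:

    from floatcsep.model import Model  # import path is an assumption

    model = Model(name="example_model",
                  model_path="models/example_forecast.csv")  # hypothetical file

    model.stage()  # pre-steps to make the model runnable

    # time-window string format assumed to follow timewindow2str (see Utilities)
    forecast = model.get_forecast("2010-01-01_2011-01-01")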

Evaluations

A test is defined using the Evaluation class.

Evaluation(name, func[, func_kwargs, ...])

Class representing a Scoring Test, which wraps the evaluation function, its arguments, parameters and hyper-parameters.

Evaluation.type

Returns the type of the test, mapped from the class attribute Evaluation._TYPES.

Evaluation.get_catalog(catalog_path, forecast)

Reads the catalog(s) from the given path(s).

Evaluation.prepare_args(timewindow, catpath, ...)

Prepares the positional arguments for the evaluation function.

Evaluation.compute(timewindow, catalog, ...)

Runs the test, structuring the arguments according to the evaluation function's signature.

Evaluation.write_result(result, path)

Dumps a test result into a JSON file.

Evaluation.from_dict(record)

Parses a dictionary and re-instantiates an Evaluation object.
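
A minimal sketch of defining an evaluation; the import path and the function name string (here assumed to resolve to pyCSEP's Poisson number test via parse_csep_func, see Utilities) are assumptions:

    from floatcsep.evaluation import Evaluation  # import path is an assumption

    # 'func' is a string resolved to a pyCSEP/floatCSEP callable;
    # the exact name string is an assumption
    ntest = Evaluation(name="Poisson N-test",
                       func="poisson_evaluations.number_test",
                       func_kwargs={})  # extra keyword arguments for the function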

Accessors

query_gcmt(start_time, end_time[, ...])

Queries the global CMT (gCMT) catalog between the given start and end times.

from_zenodo(record_id, folder[, force])

Downloads data from a Zenodo repository.

from_git(url, path[, branch, depth])

Clones a shallow copy of a repository from a git URL.
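
A minimal sketch of fetching model sources; the import path, the record id, the URL and the folders are hypothetical:

    from floatcsep.accessors import from_zenodo, from_git  # import path is an assumption

    # download a (hypothetical) Zenodo record into a local folder
    from_zenodo(record_id=123456, folder="models/zenodo_model")

    # shallow-clone a (hypothetical) git repository
    from_git("https://github.com/example/model.git", "models/git_model", depth=1)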

Extras

Additional pyCSEP functionalities.

sequential_likelihood(gridded_forecasts, ...)

Performs the likelihood test on Gridded Forecasts using an Observed Catalog.

sequential_information_gain(...[, seed, ...])

Computes the sequential information gain for a list of csep.core.forecasts.GriddedForecast objects.

vector_poisson_t_w_test(forecast, ...)

Computes Student's t-test for the information gain per earthquake over a list of forecasts, and the W-test for normality.

brier_score(forecast, catalog[, ...])

Computes the Brier score of a gridded forecast against an observed catalog.

negative_binomial_number_test(...)

Computes "negative binomial N-Test" on a gridded forecast.

binomial_joint_log_likelihood_ndarray(...)

Computes Bernoulli log-likelihood scores, assuming that earthquakes follow a binomial distribution.

binomial_spatial_test(gridded_forecast, ...)

Performs the binary spatial test on the Forecast using the Observed Catalogs.

binomial_conditional_likelihood_test(...[, ...])

Performs the binary conditional likelihood test on a Gridded Forecast using an Observed Catalog.

binary_paired_t_test(forecast, ...[, alpha, ...])

Computes the binary t-test for gridded earthquake forecasts.

log_likelihood_point_process(observation, ...)

Computes the log-likelihood for a point process.

paired_ttest_point_process(forecast, ...[, ...])

Computes a paired t-test based on point-process log-likelihoods.

Utilities

parse_csep_func(func)

Searches pyCSEP and floatCSEP for a function or method whose name matches the provided string.

parse_timedelta_string(window[, exp_class])

Parses a float or string representing the length of the testing time window.

timewindows_ti([start_date, end_date, ...])

Creates the testing intervals for a time-independent experiment.

timewindows_td([start_date, end_date, ...])

Creates the testing intervals for a time-dependent experiment.

Task(instance, method, **kwargs)

Wraps an instance, a method and its arguments for the lazy definition and deferred execution of the experiment's core tasks.

Task.run()

Executes the wrapped method with its stored arguments.

Task.check_exist()

Checks whether the task's results already exist.

timewindow2str(datetimes)

Converts a time window (list/tuple of datetimes) to a string that represents it.

plot_sequential_likelihood(evaluation_results)

Plots the results of sequential likelihood evaluations.

magnitude_vs_time(catalog)

Plots the magnitudes of a catalog's events as a function of time.
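
A minimal sketch of creating time-independent testing intervals and their string representation; the import path and the intervals keyword are assumptions:

    from datetime import datetime

    from floatcsep.utils import timewindows_ti, timewindow2str  # import path is an assumption

    windows = timewindows_ti(start_date=datetime(2010, 1, 1),
                             end_date=datetime(2015, 1, 1),
                             intervals=5)  # 'intervals' keyword is an assumption

    print(timewindow2str(windows[0]))  # e.g. '2010-01-01_2011-01-01' (format assumed)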

Readers

ForecastParsers.dat(filename)

Parses a forecast from a .dat formatted file.

ForecastParsers.xml(filename[, verbose])

Parses a forecast from an .xml formatted file.

ForecastParsers.quadtree(filename)

Parses a forecast defined on a quadtree grid.

ForecastParsers.csv(filename)

Parses a forecast from a .csv formatted file.

ForecastParsers.hdf5(filename[, group])

Parses a forecast from an .hdf5 file.

HDF5Serializer.grid2hdf5(rates, region, mag)

Serializes a gridded forecast (rates, region and magnitudes) into an HDF5 file.

serialize()
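
A minimal sketch of parsing a gridded forecast and serializing it to HDF5; the import path, the file name and the parsers' return structure are assumptions:

    from floatcsep.readers import ForecastParsers, HDF5Serializer  # import path is an assumption

    # return structure (rates, region, magnitudes) is an assumption
    rates, region, magnitudes = ForecastParsers.csv("forecast.csv")  # hypothetical file

    HDF5Serializer.grid2hdf5(rates, region, magnitudes)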