API Reference
This document is a reference for the floatCSEP API.
Commands
floatcsep is called from the terminal through its command-line entry points. A typical invocation is sketched below.
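As a minimal sketch of how the entry points are used (the `run` subcommand and the `config.yml` path are assumptions, not taken from this page), an experiment can also be launched from Python:

```python
import subprocess

# Hedged sketch: the `run` subcommand and the `config.yml` path are
# assumptions; this mirrors the terminal call `floatcsep run config.yml`.
subprocess.run(["floatcsep", "run", "config.yml"], check=True)
```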
Experiment
The experiment is defined using the Experiment class, the main class that handles an experiment's context. Its principal methods are summarized below:

- Parses the models' configuration file/dict.
- Returns a Model by its name string.
- Stages all the experiment's models.
- Parses the tests' configuration file/dict.
- Returns a CSEP catalog loaded from the given query function, or from a stored file if it exists.
- Filters the complete experiment catalog to a test sub-catalog bounded by the test time window.
- Lazily defines the experiment's core tasks by wrapping instances, methods and arguments.
- Runs the task tree.
- Reads an Evaluation result for a given time window and returns a list of the results for all tested models.
- Plots all evaluation results.
- Plots the evaluation catalogs.
- Plots and saves all the generated forecasts.
- Creates a report summarizing the Experiment's results.
- Serializes the experiment's configuration into a .yml file.
- Initializes an experiment from a .yml file.
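A minimal sketch of the typical workflow follows; the method names (`from_yml`, `stage_models`, `set_tasks`, `run`, `plot_results`, `generate_report`) are inferred from the summaries above rather than quoted from this page:

```python
# Hedged sketch of running an experiment programmatically; the method names
# are inferred from the summaries above and may differ from the actual API.
from floatcsep.experiment import Experiment

exp = Experiment.from_yml("config.yml")  # initialize from a .yml file
exp.stage_models()     # make all model sources available and runnable
exp.set_tasks()        # lazily define the experiment's core tasks
exp.run()              # run the task tree

exp.plot_results()     # plot all evaluation results
exp.generate_report()  # create a report summarizing the results
```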
Models
A model is defined using the Model class, which defines a forecast-generating model. Its principal methods are:

- Searches for, downloads, or clones the model source in the filesystem, on Zenodo, or from git, respectively.
- Performs the pre-steps needed to make the model runnable before it is integrated into the experiment.
- Initializes the database if use_db is True.
- Cleans up the generated HDF5 file.
- Wrapper that returns a forecast, hiding the access method (database storage, ti_td, etc.) under the hood.
- Creates a forecast from the model source and a given time window.
- Generates a forecast from a file by parsing and scaling it to the desired time window.
- Returns a Model instance from a dictionary containing the required attributes.
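A hedged sketch of standalone Model usage; `from_dict`, `get_source`, and `create_forecast`, as well as the dictionary layout, are assumptions inferred from the summaries above:

```python
# Hedged sketch; method names and the dictionary layout are assumptions.
from floatcsep.model import Model

model = Model.from_dict(
    {"mock_model": {"path": "models/mock_model.xml"}}  # hypothetical layout
)
model.get_source()  # search, download or clone the model source
model.create_forecast("2020-01-01_2021-01-01")  # hypothetical window string
```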
Evaluations
A test is defined using the Evaluation class, which represents a scoring test and wraps the evaluation function, its arguments, parameters and hyperparameters. Its principal methods are:

- Returns the type of the test, mapped from the class attribute Evaluation._TYPES.
- Reads the catalog(s) from the given path(s).
- Prepares the positional arguments for the evaluation function.
- Runs the test, structuring the arguments as the evaluation function requires.
- Dumps a test result into a JSON file.
- Parses a dictionary and re-instantiates an Evaluation object.
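A hedged sketch of defining a test; `from_dict` and the `func`/`plot_func` keys are assumptions inferred from the summaries above and from typical floatCSEP configurations:

```python
# Hedged sketch; the dictionary keys and function paths are assumptions.
from floatcsep.evaluation import Evaluation

n_test = Evaluation.from_dict(
    {
        "Poisson_N-test": {
            "func": "poisson_evaluations.number_test",
            "plot_func": "plot_poisson_consistency_test",
        }
    }
)
```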
Accessors
- Downloads data from a Zenodo repository.
- Clones a shallow repository from a git URL.
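A hedged sketch of the accessors; the function names `from_zenodo` and `from_git` and their signatures are assumptions matching the two summaries above:

```python
# Hedged sketch; function names, signatures, the record id and the URL are
# assumptions for illustration only.
from floatcsep.accessors import from_git, from_zenodo

from_zenodo(4739912, "models/downloaded")  # hypothetical Zenodo record id
from_git("https://github.com/example/model.git", "models/cloned")  # hypothetical URL
```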
Extras
Additional pyCSEP functionalities:

- Performs the likelihood test on a gridded forecast using an observed catalog.
- Computes Student's t-test for the information gain per earthquake over a list of forecasts, and a w-test for normality.
- Computes a "negative binomial N-test" on a gridded forecast.
- Computes Bernoulli log-likelihood scores, assuming that earthquakes follow a binomial distribution.
- Performs the binary spatial test on the forecast using the observed catalogs.
- Performs the binary conditional likelihood test on a gridded forecast using an observed catalog.
- Computes the binary t-test for gridded earthquake forecasts.
- Computes the log-likelihood for a point process.
- Performs the t-test based on the point-process log-likelihood.
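A hedged sketch of calling one of these tests directly; the module path `floatcsep.extras` and the function name `negative_binomial_number_test` are assumptions inferred from the summaries above:

```python
# Hedged sketch; the module path and function name are assumptions.
from floatcsep.extras import negative_binomial_number_test


def run_nb_number_test(forecast, catalog):
    """Run the "negative binomial N-test" on a pyCSEP GriddedForecast and
    an observed CSEPCatalog, both loaded beforehand with pyCSEP."""
    return negative_binomial_number_test(forecast, catalog)
```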
Utilities
- Searches pyCSEP and floatCSEP for a function or method whose name matches the provided string.
- Parses a float or string representing the length of the testing time window.
- Creates the testing intervals for a time-independent experiment.
- Creates the testing intervals for a time-dependent experiment.
- Converts a time window (a list/tuple of datetimes) to a string that represents it.
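A hedged sketch of the time-window string conversion; the function name `timewindow2str` and the exact output format are assumptions matching the last summary above:

```python
# Hedged sketch; the function name and output format are assumptions.
from datetime import datetime

from floatcsep.utils import timewindow2str

window = [datetime(2020, 1, 1), datetime(2021, 1, 1)]
print(timewindow2str(window))  # e.g. '2020-01-01_2021-01-01'
```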
Readers
Parsers for the forecast file formats supported by floatCSEP.