floatcsep.evaluation.Evaluation

class floatcsep.evaluation.Evaluation(name, func, func_kwargs=None, ref_model=None, plot_func=None, plot_args=None, plot_kwargs=None, markdown='')[source]

Class representing a Scoring Test, which wraps the evaluation function along with its arguments, parameters, and hyperparameters.

Parameters:
  • name (str) – Name of the Test

  • func (str, Callable) – Test function/callable

  • func_kwargs (dict) – Keyword arguments of the test function

  • ref_model (str) – Name of the reference model, if any

  • plot_func (str, Callable) – Test’s plotting function

  • plot_args (list, dict) – Positional arguments of the plotting function

  • plot_kwargs (list, dict) – Keyword arguments of the plotting function

__init__(name, func, func_kwargs=None, ref_model=None, plot_func=None, plot_args=None, plot_kwargs=None, markdown='')[source]
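
Example: a minimal sketch of constructing an Evaluation. The string identifiers below are illustrative; it is assumed here that they resolve to the corresponding pyCSEP callables (the Poisson number test and its consistency-test plot), and a Callable object may be passed for func instead.

    from floatcsep.evaluation import Evaluation

    # Sketch only: the string identifiers are assumed to resolve to pyCSEP's
    # Poisson number test and its consistency-test plot.
    n_test = Evaluation(
        name="Poisson_N-test",
        func="poisson_evaluations.number_test",
        plot_func="plot_poisson_consistency_test",
        plot_kwargs={"one_sided_lower": True},  # assumed plot-function keyword
    )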

Methods

__init__(name, func[, func_kwargs, ...])

as_dict()

Represents an Evaluation instance as a dictionary, which can be serialized and then parsed.

compute(timewindow, catalog, model, path[, ...])

Runs the test, structuring the arguments according to the given time window, catalog, model, and result path.
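
Continuing the constructor sketch above, a hedged sketch of a single compute call. Only the parameter names come from the signature; the argument types (a pair of datetimes bounding the window, a catalog path, a floatcsep model object, and an output path) are assumptions for illustration.

    from datetime import datetime

    # Assumed argument types; `model` is a placeholder for a floatcsep model
    # object prepared elsewhere in the experiment, so this is not runnable as-is.
    model = ...
    n_test.compute(
        timewindow=[datetime(2010, 1, 1), datetime(2011, 1, 1)],
        catalog="catalogs/catalog_2010-01-01_2011-01-01.json",
        model=model,
        path="results/Poisson_N-test_2010-01-01_2011-01-01.json",
    )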

from_dict(record)

Parses a dictionary and re-instantiates an Evaluation object.
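
A sketch of the dictionary round trip, assuming from_dict acts as a class-level alternative constructor: as_dict produces a serializable record (e.g. for an experiment configuration file), and from_dict rebuilds an equivalent Evaluation from it.

    # Round trip between an Evaluation instance and its dictionary record,
    # continuing from the constructor sketch above.
    record = n_test.as_dict()
    n_test_copy = Evaluation.from_dict(record)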

get_catalog(catalog_path, forecast)

Reads the catalog(s) from the given path(s).

parse_plots(plot_func, plot_args, plot_kwargs)

plot_results(timewindow, models, tree[, ...])

Plots all evaluation results.

prepare_args(timewindow, catpath, model[, ...])

Prepares the positional arguments for the Evaluation function.

read_results(window, models, tree)

Reads an Evaluation result for a given time window and returns a list of the results for all tested models.

write_result(result, path)

Dumps a test result into a JSON file.

Attributes

type

Returns the type of the test, mapped from the class attribute Evaluation._TYPES.
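
Reading the attribute is a plain property access on an instance, continuing the sketch above:

    # Inspect how this evaluation is categorized internally.
    print(n_test.type)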