General Assembly of the European Geosciences Union (EGU), Vienna

In this session, which continues and extends the earlier session "HS1.2 – Metrics, measures and objective functions in Hydrology," we welcome both theoretical and applied contributions addressing the questions below.
The fundamental goals of science are to formulate generalizations from observations and testable hypotheses (induction), and to use these generalizations to make statements about particular cases (inference), in support of decision-making. In the Geosciences, induction comprises the formulation of process laws and their implementation in models (hydrological, meteorological, etc.); inference means making predictions for unobserved places (inter- and extrapolation) and/or times (forecasting); and observation, induction and inference are all subject to uncertainty. It is now widely accepted that knowledge of this uncertainty is valuable for optimizing the design of monitoring systems, building models and making rational decisions.
These goals unite the Geosciences. What separates them is the multitude of data sources, approaches to building and testing models (structure diagnosis, parameter optimization and validation), metrics and scores used, and (last but not least) ways of estimating and handling uncertainty. This separation obstructs communication within and across disciplines. While specific disciplines will always require specific approaches, progress towards a commonly applicable framework for generalization (model building) and application (prediction) can be achieved by addressing the following questions:
- How to evaluate, in a generalized way, the usefulness of data and models for a given task (their information content)?
- How to evaluate the appropriateness (generality, parsimony) of models given the data and the purpose?
- How to evaluate the interplay of data, model-structure and predictive uncertainty, i.e. the flow of information from data through models to decision-makers?
- How to learn from the confrontation of models with data, i.e. how to detect, diagnose and correct model structural errors?
Information theory, dating back at least to Shannon (1948), offers a rigorous and universal framework in which information in data and models, as well as uncertainty, can be addressed. Closely related to Bayesian theory (Jaynes, 2003), it has the potential to serve as a suitable starting point.
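As a minimal sketch of the kind of measures this framework provides, the following computes the Shannon entropy of a discrete distribution (uncertainty in the data) and the relative entropy (Kullback–Leibler divergence) between observed frequencies and a model's predictive distribution. The binned discharge frequencies are hypothetical illustration data, not taken from any study:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # by convention, 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """Relative entropy D_KL(p || q) in bits: the information lost when
    q (e.g. a model's predictive distribution) is used to approximate
    p (e.g. the observed frequency distribution)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# Hypothetical example: observations binned into four classes
obs = np.array([0.1, 0.4, 0.4, 0.1])        # observed class frequencies
model = np.array([0.25, 0.25, 0.25, 0.25])  # uninformative (uniform) model

print(shannon_entropy(obs))       # uncertainty in the observations (bits)
print(kl_divergence(obs, model))  # information the model fails to capture
```

A lower divergence indicates a more informative model; an uninformative uniform prediction, as above, serves as a natural reference point.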