MeteoIO Documentation (MeteoIODoc-2.7.0)
General concepts

A large number of the problems encountered by users of numerical models working on large meteorological data sets can be traced back to the Input/Output functionality. This stems from model developers focusing on the core modeling issues at the expense of the I/O routines, which are costly to implement properly. As a result, the I/O routines often lack flexibility and robustness. When numerical models are used in operational applications, this becomes a major drawback and a regular source of problems.

The MeteoIO library has been designed to address this issue. It is an additional layer between the data and the numerical model, handling the retrieval of data from various data sources as well as the data pre-processing.

It is built as a library, which means that it can be transparently integrated into other software alongside other libraries. This is similar to the way a car is made of several components: there is a frame (for us, this could be the MeteoIO library), there is an engine (for a model such as Snowpack, this could be its "libSnowpack" library that does all the heavy work) and on top there is the bodywork (all the "glue" needed to make libSnowpack usable by an end user as a standalone application, here called "snowpack-app"). The same components can thus be reused in other models (the same engine might be used in several cars, while the same bodywork might contain different engines).

library_analogy.png
Software library analogy: a software is made of several components (libraries)

General MeteoIO structure

meteoio_workflow.png
MeteoIO workflow
MeteoIO can be seen as a set of modules focused on the handling of input/output operations (including data preparation) for numerical simulations in the realm of earth sciences. On the visible side, it offers the following modules, working either on a pre-determined set of meteorological parameters or on parameters added by the developer:

  • a set of plugins for accessing the data (for example, a plugin might be responsible for fetching the raw data from a given database)
  • a set of raw data editing methods to select/merge/convert the raw data
  • a set of filters and processing elements for applying transformations to the data (for example, a filter might remove all data that is out of range)
  • a set of resampling algorithms to temporally interpolate the data at the required timestamp
  • a set of parametrizations to generate data/meteorological parameters when they could not be interpolated
  • a set of spatial interpolation algorithms (for example, such an algorithm might perform Inverse Distance Weighting for filling a grid with spatially interpolated data)
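Some of the processing steps listed above (range filtering, temporal resampling, Inverse Distance Weighting) can be sketched in a few lines of self-contained C++. This is a toy illustration of the underlying algorithms only, not MeteoIO's actual API; all types and function names below are invented for the sketch.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical types for this sketch (not MeteoIO classes).
struct Sample  { double time;  double value; };  // one measurement in a time series
struct Station { double x, y;  double value; };  // one station for spatial interpolation

// Filter step: drop samples whose value is outside [lo, hi]
// (in the spirit of a min/max range filter).
std::vector<Sample> filter_min_max(const std::vector<Sample>& in, double lo, double hi) {
    std::vector<Sample> out;
    for (const Sample& s : in)
        if (s.value >= lo && s.value <= hi) out.push_back(s);
    return out;
}

// Temporal resampling step: linear interpolation at time t between the nearest
// samples before and after (assumes `in` is sorted by time and t is bracketed).
double resample_linear(const std::vector<Sample>& in, double t) {
    for (std::size_t i = 1; i < in.size(); ++i) {
        if (in[i].time >= t) {
            const Sample& a = in[i - 1];
            const Sample& b = in[i];
            const double w = (t - a.time) / (b.time - a.time);
            return a.value + w * (b.value - a.value);
        }
    }
    return in.back().value;  // t beyond the series: hold the last value
}

// Spatial interpolation step: Inverse Distance Weighting at point (x, y),
// weighting each station by 1/distance^power.
double idw(const std::vector<Station>& stations, double x, double y, double power = 2.0) {
    double num = 0.0, den = 0.0;
    for (const Station& s : stations) {
        const double d = std::hypot(s.x - x, s.y - y);
        if (d < 1e-9) return s.value;            // query point sits on a station
        const double w = 1.0 / std::pow(d, power);
        num += w * s.value;
        den += w;
    }
    return num / den;
}
```

In MeteoIO itself, each of these steps is a configurable module chosen and tuned through the configuration file rather than hard-coded as above.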

Each of these steps can be configured and fine-tuned according to the needs of the model and the wishes of the user. Moreover, only a few assumptions are made about the data that you are using: each data point has to be associated with a geographic location (defined by some sort of coordinates) and very often you will also need to provide a Digital Elevation Model.
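To give an idea of what this configuration looks like, the fragment below loosely follows MeteoIO's INI-style conventions (one section per processing stage, keys prefixed by the meteorological parameter they act on). The exact keys, values and file names here are illustrative assumptions, not a verified working setup; please refer to the plugin and filter documentation for the authoritative syntax.

```ini
[Input]
COORDSYS  = CH1903            ; coordinate system of the station coordinates
TIME_ZONE = 1
METEO     = SMET              ; plugin that reads the raw meteorological data
METEOPATH = ./input/meteo
STATION1  = WFJ2.smet
DEM       = ARC               ; plugin that reads the Digital Elevation Model
DEMFILE   = ./input/dem.asc

[Filters]
; remove out-of-range air temperatures (in Kelvin)
TA::FILTER1   = MIN_MAX
TA::ARG1::MIN = 240
TA::ARG1::MAX = 320

[Interpolations1D]
; temporally interpolate air temperature at the requested timestamp
TA::RESAMPLE = LINEAR

[Generators]
; generate the local pressure from a standard atmosphere when it is missing
P::GENERATOR1 = STD_PRESS

[Interpolations2D]
; fill grids of air temperature by Inverse Distance Weighting
TA::ALGORITHMS = IDW
```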

Typical setup

typical_setup.png
Typical setup of MeteoIO for operational applications
MeteoIO has been designed to accommodate both the needs of carefully crafted simulations for a specific purpose or study and the needs of operational simulations that run automatically and unattended. A typical setup for such operational applications consists of:

  • a data acquisition system, made of various sensors (usually mounted on a common mast and thus seen as belonging to a station) and some system to bring the data back to a data repository
  • a data storage system that usually also offers some way of distributing the data (often a database, but sometimes only data files on a disk) and is mostly seen as the data source by the application
  • applications using the data and producing results that are published to their end users (either through an automated system that one can connect to, or through some visualization tool that one can use to explore the results)

In this setup, MeteoIO is the "glue" between the numerical model at the core of the application and the data sources on one side and the publication system on the other.