Following up on the GVM kickoff meeting (30 April-1 May 2012, Edinburgh), G. Valentine sent an email to P. Papale, J. Eichelberger, C. Connor, and A. Neri to initiate discussion and planning on the general issue of hazard models in GVM and their intercomparison. Papale and Eichelberger responded and are therefore listed as co-writers of these initial recommendations.
For the purposes of this discussion, we use the term “hazard model” to mean an empirical or theoretical representation of a process that is expressed in mathematical form and solved either analytically or numerically, and that: (1) provides an estimate of the areal extent of a hazardous volcanic process (such as tephra transport in the atmosphere, tephra fall, pyroclastic currents, lava flows, lahars, debris avalanches, eruption-related seismicity, or ground deformation) and the conditions associated with that process (e.g., dynamic pressure, inundation depth); and/or (2) provides an estimate of the probability of a hazardous volcanic process over a defined period of time; and/or (3) uses deterministic techniques to interpret monitoring signals in order to support forecasting. In practice, most hazard models involve numerical solutions using computer codes.

The term verification refers to checking that a code is solving its equations correctly (doing the math correctly). Validation means checking that a model accurately represents, within some defined acceptable range, the physical system it is intended to capture, usually by comparison with experiments or very well constrained eruptions. Benchmarking means comparing the outputs of different models that are intended to represent the same physical process.
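To make the verification concept concrete, the sketch below shows verification in its simplest form: a numerical scheme is checked against a problem with a known analytical solution, and the error is confirmed to shrink at the expected rate as the step size is refined. The decay equation, function names, and parameter values are purely illustrative and are not drawn from any particular GVM code.

```python
import numpy as np

def euler_decay(y0, k, t_end, dt):
    """Forward-Euler solution of dy/dt = -k*y (a stand-in for any model equation)."""
    n_steps = int(round(t_end / dt))
    y = y0
    for _ in range(n_steps):
        y += dt * (-k * y)
    return y

# Verification: compare against the exact solution y0*exp(-k*t) for a
# sequence of step sizes and confirm first-order convergence.
y0, k, t_end = 1.0, 0.5, 10.0
exact = y0 * np.exp(-k * t_end)
errors = []
for dt in (0.1, 0.05, 0.025):
    errors.append(abs(euler_decay(y0, k, t_end, dt) - exact))
    print(f"dt={dt:6.3f}  error={errors[-1]:.3e}")

# Halving dt should roughly halve the error for a first-order scheme.
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print("error ratios (expect ~2):", [f"{r:.2f}" for r in ratios])
```

Validation, by contrast, would compare model output against field or laboratory observations rather than against the model's own equations, and benchmarking would compare the outputs of two or more models run on a common scenario.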
Three main actions were identified:
Action 1. Identify and articulate exactly how GVM would use or promote hazards models, including the standards that these models would need to meet.
Action 2. Identify how hazard models can or should be compared, given that each was developed to meet different requirements.
Action 3. Initiate community-wide “projects” that result in real products addressing Actions 1 and 2 above.
A recommendation for Action 1 is that GVM provide a form of “certification” for models that is based on standardized documentation and transparency, rather than on “comparison.” The standardized documentation and transparency requirements could include the following: (1) the code is open source; (2) clearly stated objectives and ranges of applicability; (3) a clear description of the model equations and how they are derived from first principles or data; (4) clear documentation of the numerical method; (5) documentation of verification, validation, and benchmarking; (6) a description of uncertainties; (7) a user manual; and (8) some form of critical review/assessment of the model by one or more independent researchers. Once a model has GVM certification, users are assured that these standards have been met, while retaining the flexibility to decide which model is best suited to their purposes.
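One possible way to make the standardized documentation operational would be a machine-readable record accompanying each certified model, mirroring the checklist above. The sketch below is hypothetical: the field names, file paths, and model name are placeholders, not an agreed GVM schema.

```python
# Illustrative only: a possible machine-readable certification record that
# mirrors the documentation checklist above. All names and values are
# hypothetical placeholders, not an agreed GVM format.
certification_record = {
    "model_name": "ExampleTephraModel",          # hypothetical model
    "version": "1.2.0",
    "open_source": True,
    "license": "GPL-3.0",
    "objectives_and_applicability": "docs/scope.md",
    "governing_equations": "docs/equations.pdf",
    "numerical_method": "docs/numerics.pdf",
    "verification_report": "docs/verification.pdf",
    "validation_report": "docs/validation.pdf",
    "benchmarking_report": "docs/benchmarks.pdf",
    "uncertainty_description": "docs/uncertainty.pdf",
    "user_manual": "docs/manual.pdf",
    "independent_reviews": ["review_reviewer1.pdf"],
}

# A certification check then reduces to a completeness test over the record.
required = [k for k in certification_record if k not in ("model_name", "version")]
missing = [k for k in required if not certification_record[k]]
print("certification complete" if not missing else f"missing: {missing}")
```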
Action 2 requires an “organic,” community-driven process for each class of models (e.g., tephra fall, pyroclastic currents). The work of the IAVCEI Tephra Working Group is something that can be built upon, and a new effort is in its early stages for volcanic mass flow models (led by S. Charbonnier and using vhub.org as a platform). However, it is important to keep in mind that comparison of models is not always feasible, because each model is typically developed with a different set of requirements (user/developer goals) in mind; this was the experience of the effort initiated more than a decade ago to compare conduit flow models. Lack of comparability does not necessarily mean that a model is poor. This is why we emphasize the transparency and documentation aspects of potential GVM certification, rather than strict comparison of results.
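Where comparison is feasible, a quantitative benchmark might look like the sketch below: two models' predictions at the same set of sample points for an agreed scenario are compared with simple misfit statistics. The arrays and model names are placeholders, not real model output, and the metrics are only one possible choice.

```python
import numpy as np

# Placeholder outputs: predicted deposit thickness (cm) from two hypothetical
# models at the same benchmark sample points. A real benchmark would fix an
# agreed scenario (source parameters, wind field, sample locations).
model_a = np.array([12.0, 8.5, 5.1, 2.3, 0.9])
model_b = np.array([11.2, 9.0, 4.4, 2.9, 1.1])

def misfit_stats(a, b):
    """Simple pairwise comparison metrics between two model outputs."""
    diff = a - b
    return {
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
        "mean_bias": float(np.mean(diff)),
        "max_abs_diff": float(np.max(np.abs(diff))),
    }

print(misfit_stats(model_a, model_b))
```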
Action 3 establishes a management/leadership structure to address the other two actions. Both Actions 1 and 2 require resources. Detailed code documentation, benchmarking, and related work require significant time commitments by model developers and users, and this commitment competes with the time they have for research. While a degree of “volunteerism” will be needed, it is unlikely that the community can be mobilized around these actions, and real results/deliverables produced, without some funding support for travel and for time and effort. In addition, the actions need to be actively managed, with real goals, deadlines, and products.
IAVCEI working groups and commissions should be involved and, as mentioned above, there is much to build on, particularly the Tephra Working Group’s efforts. However, these organizations may not have the capability to actively manage work and deliver programmatic results. It might make more sense for GVM to establish a Hazard Model Task Force that has some resources and that is committed to developing additional resources (through proposals and grants) to support the actions identified above, as well as others that might arise.