Blind testing of shoreline evolution models.
Montaño, Jennifer; Coco, Giovanni; Antolínez, Jose A.A.; Beuzen, Tomas; Bryan, Karin R.; Cagigal, Laura; Castelle, Bruno; Davidson, Mark A.; Goldstein, Evan B.; Ibaceta, Raimundo; Idier, Déborah; Ludka, Bonnie C.; Masoud-Ansari, Sina; Méndez, Fernando J.; Murray, A. Brad; Plant, Nathaniel G.; Ratliff, Katherine M.; Robinet, Arthur; Rueda, Ana; Sénéchal, Nadia; Simmons, Joshua A.; Splinter, Kristen D.; Stephens, Scott; Townend, Ian; Vitousek, Sean; Vos, Kilian
Files
Published version, 6.051 MB
Citation
Montaño, J., Coco, G., Antolínez, J. A. A., Beuzen, T., Bryan, K. R., Cagigal, L., … Vos, K. (2020). Blind testing of shoreline evolution models. Scientific Reports, 10(1), 2137. https://doi.org/10.1038/s41598-020-59018-y
Permanent Research Commons link: https://hdl.handle.net/10289/13737
Abstract
Beaches around the world continuously adjust to daily and seasonal changes in wave and tide conditions, which are themselves changing over longer time-scales. Different approaches to predict multi-year shoreline evolution have been implemented; however, robust and reliable predictions of shoreline evolution are still problematic even in short-term scenarios (shorter than decadal). Here we show results of a modelling competition, where 19 numerical models (a mix of established shoreline models and machine learning techniques) were tested using data collected at Tairua beach, New Zealand: 18 years of daily averaged alongshore shoreline position and beach rotation (orientation) data obtained from a camera system. In general, traditional shoreline models and machine learning techniques were able to reproduce shoreline changes during the calibration period (1999-2014) under normal conditions, but some of the models struggled to predict extreme and fast oscillations. During the forecast period (unseen data, 2014-2017), both approaches showed a decrease in the models' capability to predict the shoreline position. This was more evident for some of the machine learning algorithms. A model ensemble performed better than individual models and enables assessment of uncertainties in model architecture. Research-coordinated approaches (e.g., modelling competitions) can fuel advances in predictive capabilities and provide a forum for discussion of the advantages and disadvantages of available models.
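The abstract reports that a simple model ensemble outperformed individual shoreline models and provided a handle on structural uncertainty. The sketch below is a hypothetical, minimal illustration of that idea only: the model names, synthetic data, unweighted averaging, and RMSE metric are illustrative assumptions, not the specific ensemble or skill metrics used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)                                # daily time steps (arbitrary)
observed = 50 + 5 * np.sin(2 * np.pi * t / 180)   # synthetic "shoreline position" (m)

# Synthetic predictions from three hypothetical models: the observed signal
# plus an illustrative bias and noise level for each model.
predictions = {
    "model_a": observed + rng.normal(0.0, 2.0, t.size),
    "model_b": observed + 1.5 + rng.normal(0.0, 1.0, t.size),
    "model_c": observed - 1.0 + rng.normal(0.0, 3.0, t.size),
}

stack = np.vstack(list(predictions.values()))
ensemble_mean = stack.mean(axis=0)    # simple (unweighted) multi-model ensemble
ensemble_spread = stack.std(axis=0)   # spread across models as a crude uncertainty proxy

def rmse(pred, obs):
    """Root-mean-square error between a prediction and the observations."""
    return np.sqrt(np.mean((pred - obs) ** 2))

for name, pred in predictions.items():
    print(f"{name}: RMSE = {rmse(pred, observed):.2f} m")
print(f"ensemble mean: RMSE = {rmse(ensemble_mean, observed):.2f} m")
print(f"mean ensemble spread = {ensemble_spread.mean():.2f} m")
```

Under these assumptions, uncorrelated errors partly cancel when the models are averaged, so the ensemble mean typically scores a lower RMSE than most individual members, while the inter-model spread gives a first-order view of uncertainty due to model architecture.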
Date
2020
Type
Rights
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
© The Author(s) 2020