Rethinking big data analytics for critical infrastructure resilience
Abstract
The concept of 'resilience analytics' has recently been proposed as a means to leverage the promise of big data and associated analytic techniques to improve the resilience of interdependent critical infrastructure systems and the communities they support. Given (1) the prevalence of high-profile natural and man-made disasters, (2) the growing interest in resilience as a concept for managing complexity, and (3) recent advances in machine learning and other data-driven analytic techniques, the temptation to accept and develop resilience analytics without question is almost overwhelming. We examine the efficacy of resilience analytics by answering a single motivating question: Can big data analytics help critical infrastructure systems adapt to surprise? We find that big data analytics can support resilience to rare, situational surprises that are captured in analytic models. However, when critical infrastructure systems are challenged by fundamental surprises never conceived during analytic model development, the adoption of resilience analytics may prove useless for decision support, or even harmful by increasing danger during unprecedented events. We demonstrate that these dangers are not limited to a single infrastructure context by highlighting the limits of analytic models in recent disasters, including hurricanes, dam failures, stock market flash crashes, and cascading electric power failures. We conclude that big data analytics alone cannot adapt to the very events that motivate their use, and that greater adoption of resilience analytics may, ironically, make critical infrastructure systems more vulnerable to surprise.
License
Copyright (c) 2019 David Alderson, Daniel Eisenberg, Thomas Seager
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.