Researchers Introduce AI, Deep Learning Model for Improved Earthquake Forecasting

Researchers from the University of California, Berkeley, the University of California, Santa Cruz, and the Technical University of Munich recently introduced the Recurrent Earthquake foreCAST (RECAST), a deep learning model for earthquake forecasting, and explored its performance on synthetic and regional earthquake data sets.

Earthquake forecasting has typically relied on statistical models that do not scale to fully exploit the large earthquake data sets now available. The researchers build on recent developments in deep learning for forecasting event sequences in general to create an implementation tailored to earthquake data.

In their recently released paper, the researchers say the deep learning model can use larger data sets and offers greater flexibility than the current standard model, ETAS, which has improved only incrementally since its development in 1988.

The researchers used NVIDIA GPU workstations to train the model.

The promise of RECAST is that its flexibility, self-learning capability, and ability to scale will enable it to interpret larger data sets and make better predictions during earthquake sequences.

As of now, the ETAS model is the most widely used model for describing the frequency of earthquakes in a region. It is based on the theory that every earthquake has a magnitude-dependent ability to trigger aftershocks. The ETAS model treats an earthquake sequence as a mix of aftershocks (events triggered by other earthquakes) and background earthquakes (events that occur independently of other earthquakes).
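To make the magnitude-dependent triggering concrete, here is a minimal sketch of the classic ETAS conditional intensity from Ogata's 1988 formulation: the expected event rate at time t is a constant background rate plus one aftershock kernel per past event, each scaled by its parent's magnitude. The parameter values below are illustrative placeholders, not values fitted in the paper, and the paper's benchmark ETAS implementation may differ in detail.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags,
                   mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1, m_c=3.0):
    """Conditional intensity lambda(t) of the ETAS model: a constant
    background rate mu plus one Omori-Utsu aftershock kernel per past
    event, scaled by the parent event's magnitude (Ogata, 1988).
    All parameter defaults are illustrative, not fitted values."""
    past = event_times < t
    dt = t - event_times[past]
    # Magnitude-dependent productivity: larger events trigger more aftershocks.
    productivity = K * np.exp(alpha * (event_mags[past] - m_c))
    # Omori-Utsu power-law decay of the aftershock rate with elapsed time.
    decay = (dt + c) ** (-p)
    return mu + np.sum(productivity * decay)

# Example: event rate one day after a M5.0 event preceded by a M4.0 event.
times = np.array([0.0, 2.0])   # event times in days
mags = np.array([4.0, 5.0])    # event magnitudes
print(etas_intensity(3.0, times, mags))
```

Note how the functional form is fixed in advance: fitting ETAS means tuning a handful of parameters (mu, K, alpha, c, p), which is exactly the constraint RECAST is designed to relax.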

RECAST is fundamentally different from ETAS: it does not require an exact functional form relating catalog data (records of past earthquakes) to the likelihood of the next event time.

Tests on synthetic data suggest that with a modest-sized data set, RECAST accurately models earthquake-like point processes directly from cataloged data. The paper further states: "Tests on earthquake catalogs in Southern California indicate improved fit and forecast accuracy compared to our benchmark when the training set is sufficiently long (>10⁴ events). The basic components in RECAST add flexibility and scalability for earthquake forecasting without sacrificing performance."

RECAST builds on a recent development in machine learning known as neural temporal point processes (TPPs). The model uses a general-purpose encoder-decoder neural network architecture that predicts the timing of the next event given the history of past events.
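For readers unfamiliar with neural TPPs, the sketch below illustrates the encoder-decoder pattern just described: a recurrent encoder summarizes the event history, and a decoder maps that summary to a distribution over the waiting time until the next event. This is a minimal illustration under simplifying assumptions, not RECAST's actual architecture; the GRU encoder, the exponential-distribution decoder, and the choice of two input features (inter-event time and magnitude) are all assumptions made here for brevity.

```python
import torch
import torch.nn as nn

class NeuralTPP(nn.Module):
    """Minimal encoder-decoder temporal point process sketch.
    A GRU encodes the history of (inter-event time, magnitude) pairs;
    a linear decoder maps each hidden state to the rate of an
    exponential distribution over the time to the next event.
    Illustrative only; RECAST's actual decoder is richer."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden_size,
                              batch_first=True)
        self.decoder = nn.Linear(hidden_size, 1)

    def forward(self, inter_times, magnitudes):
        # One feature vector per past event: (batch, seq_len, 2).
        features = torch.stack([inter_times, magnitudes], dim=-1)
        hidden, _ = self.encoder(features)
        # Softplus keeps the predicted event rate strictly positive.
        rate = nn.functional.softplus(self.decoder(hidden))
        return rate.squeeze(-1)

    def neg_log_likelihood(self, inter_times, magnitudes):
        """Exponential NLL for each inter-event gap, conditioned on
        the history up to the previous event."""
        rate = self.forward(inter_times, magnitudes)
        # Predict each gap from the history that precedes it.
        rate_prev, target = rate[:, :-1], inter_times[:, 1:]
        return (rate_prev * target - torch.log(rate_prev)).sum(dim=-1).mean()

# Toy usage: one synthetic sequence of 10 events.
dt = torch.rand(1, 10)              # inter-event times
mags = 3.0 + 2.0 * torch.rand(1, 10)  # magnitudes
model = NeuralTPP()
loss = model.neg_log_likelihood(dt, mags)
loss.backward()
```

The key contrast with the ETAS sketch above: nothing in this network hard-codes Omori-Utsu decay or magnitude-dependent productivity; the encoder is free to learn whatever history dependence the training catalog supports.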

This research study is a proof of concept for applying neural temporal point process models to earthquake forecasting. More generally, neural temporal point process models and deep learning offer an important advantage: they do not require prior knowledge of the relationship between earthquake features and forecast probabilities.

With larger data sets, the researchers are starting to see improvements from RECAST over the standard ETAS model.

This new deep learning based approach allows researchers to incorporate larger data sets, potentially with more information about each earthquake.

To advance the state of the art in earthquake forecasting, Dascher-Cousineau, one of the authors of the research paper, is working with a team of undergraduates at UC Berkeley to train the model on earthquake catalogs from multiple regions for better predictions.


Via ~ NVIDIA Blog
