Show simple item record

dc.contributor.author     Choi, Changkyu
dc.contributor.author     Bianchi, Filippo Maria
dc.contributor.author     Kampffmeyer, Michael
dc.contributor.author     Jenssen, Robert
dc.date.accessioned       2021-07-02T12:53:06Z
dc.date.available         2021-07-02T12:53:06Z
dc.date.created           2021-01-13T15:20:35Z
dc.date.issued            2020
dc.identifier.citation    Proceedings of the Northern Lights Deep Learning Workshop. 2020, 1.
dc.identifier.uri         https://hdl.handle.net/11250/2763134
dc.description.abstract   Forecasting the dynamics of time-varying systems is essential to maintaining their sustainability. Recent studies have found that Recurrent Neural Networks (RNNs) applied to forecasting tasks outperform conventional models such as the AutoRegressive Integrated Moving Average (ARIMA) model. However, due to the structural limitation of the vanilla RNN, which only holds unit-length internal connections, learning the representation of time series with missing data can be severely biased. The goal of this paper is to provide an RNN architecture that is robust against the bias from missing data. We propose the Dilated Recurrent Attention Network (DRAN). The proposed model has a stacked structure of multiple RNN layers, each with a different length of internal connections, which allows it to incorporate previous information at different time scales. DRAN updates its state by a weighted average of the layer states. To focus on the layers that carry reliable information despite the bias from missing data, it leverages an attention mechanism that learns the distribution of attention weights among the layers. We report that our model outperforms conventional ones in forecast accuracy on two benchmark datasets, including a real-world electricity load dataset.
dc.language.iso           eng
dc.title                  Short-Term Load Forecasting with Missing Data using Dilated Recurrent Attention Networks
dc.type                   Peer reviewed
dc.type                   Journal article
dc.rights.holder          © The author(s)
dc.description.version    publishedVersion
cristin.ispublished       true
cristin.fulltext          original
cristin.qualitycode       1
dc.identifier.doi         10.7557/18.5136
dc.identifier.cristin     1870790
dc.source.journal         Proceedings of the Northern Lights Deep Learning Workshop
dc.source.volume          1
dc.source.pagenumber      6
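
The abstract describes DRAN as a stack of recurrent layers with different internal connection lengths (dilations) whose states are combined by an attention-weighted average. The following is a minimal PyTorch sketch of that idea only, not the authors' implementation: the class name DilatedRecurrentAttention, the use of GRU cells, the linear attention scorer, and the default dilations (1, 2, 4) are illustrative assumptions, and handling of the missing-value mask itself is omitted.

import torch
import torch.nn as nn


class DilatedRecurrentAttention(nn.Module):
    # Hypothetical sketch: each layer l refreshes its state only every
    # dilations[l] steps, so higher layers summarize the series at coarser
    # time scales; the model state is an attention-weighted average of
    # the per-layer states.

    def __init__(self, input_size, hidden_size, dilations=(1, 2, 4)):
        super().__init__()
        self.dilations = dilations
        self.cells = nn.ModuleList(
            [nn.GRUCell(input_size, hidden_size) for _ in dilations]
        )
        # One scalar attention score per layer state.
        self.attn = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, time, input_size)
        batch, steps, _ = x.shape
        states = [x.new_zeros(batch, cell.hidden_size) for cell in self.cells]
        outputs = []
        for t in range(steps):
            for l, (cell, d) in enumerate(zip(self.cells, self.dilations)):
                # Dilated update: a larger d means fewer updates, so a single
                # corrupted or imputed step influences this layer less often.
                if t % d == 0:
                    states[l] = cell(x[:, t], states[l])
            stacked = torch.stack(states, dim=1)             # (batch, layers, hidden)
            weights = torch.softmax(self.attn(stacked), dim=1)
            # Combined state: weighted average of the layer states.
            outputs.append((weights * stacked).sum(dim=1))
        return torch.stack(outputs, dim=1)                   # (batch, time, hidden)

In a forecasting setup one would typically place a small regression head, e.g. nn.Linear(hidden_size, 1), on the last combined state to produce the one-step-ahead load estimate; that head and the training loop are left out of the sketch.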

