Long-term Forecasting using Tensor-Train RNNs

Rose Yu

Stephan Zheng

Anima Anandkumar

Yisong Yue

Department of Computing and Mathematical Sciences, Caltech, Pasadena, CA
{rose, stephan, anima, yyue}@caltech.edu

Abstract

We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, due to long-term temporal dependencies, higher-order correlations and sensitivity to error propagation. Our proposed architecture addresses these issues by learning the nonlinear dynamics directly using higher-order moments and higher-order state transition functions. Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance. We theoretically establish approximation guarantees for Tensor-Train RNNs on general sequence inputs; such guarantees are not available for standard RNNs. We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well as on real-world climate and traffic data.

1 Introduction

One of the central questions in science is forecasting: given the past history, how well can we predict the future? In many domains with complex multivariate correlation structures and nonlinear dynamics, forecasting is highly challenging since the system has long-term temporal dependencies and higher-order dynamics. Examples of such systems abound in science and engineering, from biological neural network activity and fluid turbulence to climate and traffic systems. Since current forecasting systems are unable to faithfully represent the higher-order dynamics, they have limited ability for accurate long-term forecasting. A key challenge is therefore to accurately model nonlinear dynamics and obtain stable long-term predictions, given a dataset of realizations of the dynamics. Here, the forecasting problem can be stated as follows: how can we efficiently learn a model that, given only a few initial states, can reliably predict a sequence of future states over a long horizon of T time-steps?

Common approaches to forecasting involve linear time series models such as auto-regressive moving average (ARMA), state space models such as the hidden Markov model (HMM), and deep neural networks. We refer readers to the survey on time series forecasting by [3] and the references therein. Recurrent neural networks (RNNs), as well as their memory-based extensions such as the LSTM, are a class of models that have achieved good performance on sequence prediction tasks from demand forecasting [5] to speech recognition [10] and video analysis [8]. Although these methods can be effective for short-term, smooth dynamics, neither analytic nor data-driven learning methods tend to generalize well to capturing long-term nonlinear dynamics and predicting them over longer time horizons.

To address this issue, we propose a novel family of tensor-train recurrent neural networks that can learn stable long-term forecasting.

Figure 1: Tensor-train recurrent cells within a seq2seq model.

Figure 2: Tensor-train unit.

These models have two key features: 1) they explicitly model the higher-order dynamics, by using a longer history of previous hidden states and higher-order state interactions with multiplicative memory units; and 2) they are scalable, using tensor trains, a structured low-rank tensor decomposition that greatly reduces the number of model parameters while mostly preserving the correlation structure of the full-rank model. We analyze Tensor-Train RNNs theoretically and demonstrate that TT-RNNs can forecast accurately over significantly longer time horizons than standard RNNs and LSTMs on real-world applications.

2 Forecasting using Tensor-Train RNNs

2.1 Forecasting Nonlinear Dynamics

Our goal is to learn an efficient model f for sequential multivariate forecasting in environments with nonlinear dynamics. Such systems are governed by dynamics that describe how a system state x_t ∈ R^d evolves through a set of nonlinear differential equations:

\xi_i\left(x_t, \frac{dx}{dt}, \frac{d^2 x}{dt^2}, \ldots; \phi\right) = 0,    (1)

where each ξ_i can be an arbitrary (smooth) function of the state x_t and its derivatives. Such systems exhibit higher-order correlations, long-term dependencies and sensitivity to error propagation, and thus form a challenging setting for learning. Given a sequence of initial states x_0 . . . x_t, the forecasting problem aims to learn a model f

f : (x_0 \ldots x_t) \mapsto (y_t \ldots y_T), \qquad y_t = x_{t+1},    (2)

that outputs a sequence of future states x_{t+1} . . . x_T. Hence, accurately approximating the dynamics ξ is critical to learning a good forecasting model f and accurately predicting over long time horizons.

In deep learning, common approaches for modeling dynamics usually employ first-order hidden-state models, such as recurrent neural networks (RNNs). An RNN with a single RNN cell recursively computes the output y_t from a hidden state h_t using:

h_t = f(x_t, h_{t-1}; \theta), \qquad y_t = g(h_t; \theta),    (3)

where f is the state transition function, g is the output function and θ are the model parameters. RNNs have many variations, including LSTMs [7] and GRUs [4]. Although RNNs are very expressive, they compute h_t using only the previous state h_{t−1} and the input x_t. Such models do not explicitly model higher-order dynamics and only implicitly model long-term dependencies between all historical states h_0 . . . h_t, which limits their forecasting effectiveness in environments with nonlinear dynamics.

2.2 Tensorized Recurrent Neural Networks

To effectively learn nonlinear dynamics, we propose Tensor-Train RNNs, or TT-RNNs, a class of higher-order models that can be viewed as a higher-order generalization of RNNs. We developed TT-RNNs with two goals in mind: explicitly modeling 1) L-order Markov processes with L steps of temporal memory and 2) polynomial interactions between the hidden states h_· and x_t.

First, we consider a longer "history": we keep L historic states h_{t−1}, · · · , h_{t−L} with an activation function f:

h_t = f(x_t, h_{t-1}, \cdots, h_{t-L}; \theta).

Early work [6] has shown that, with a large enough hidden state size, such recurrent structures are capable of approximating any dynamics.

Second, to learn the nonlinear dynamics ξ efficiently, we use higher-order moments to approximate the state transition function. We construct a higher-order transition tensor by modeling a degree-P polynomial interaction between hidden states. The TT-RNN with a standard RNN cell is defined by:

h_{t;\alpha} = f\Big(W^{hx} x_t + \sum_{i_1,\cdots,i_P} \mathcal{W}_{\alpha i_1 \cdots i_P}\, s_{t-1;i_1} \otimes \cdots \otimes s_{t-1;i_P}\Big),    (4)

where \mathcal{W} is a P-dimensional tensor, the i_· index the hidden states and P is the polynomial degree. Here, we define the L-lag hidden state as s^{T}_{t-1} = [1\; h_{t-1}\; \ldots\; h_{t-L}]; the bias unit 1 models all possible polynomial expansions up to order P in a compact form. The TT-RNN with an LSTM cell, or "TLSTM", is defined analogously:

\begin{bmatrix} i_t \\ g_t \\ f_t \\ o_t \end{bmatrix}_{\alpha} = \sigma\Big(W^{hx} x_t + \sum_{i_1,\cdots,i_P} \mathcal{W}_{\alpha i_1 \cdots i_P}\, s_{t-1;i_1} \otimes \cdots \otimes s_{t-1;i_P}\Big), \qquad c_t = c_{t-1} \circ f_t + i_t \circ g_t, \qquad h_t = c_t \circ o_t,

where ◦ denotes the Hadamard product. We use TT-RNN in both the encoder and decoder of the sequence-to-sequence (Seq2Seq) framework [11] (see Figure 1): the encoder receives the initial states and the decoder predicts x_{t+1}, . . . , x_T, using its previous prediction y_t as input at each time step t.

Unfortunately, due to the "curse of dimensionality", the number of parameters in \mathcal{W} with hidden size H grows exponentially as O(HL^P), which makes the higher-order model prohibitively large to train. To overcome this difficulty, we utilize tensor trains to approximate the weight tensor \mathcal{W}. A tensor-train model decomposes the P-dimensional tensor \mathcal{W} into a network of sparsely connected low-dimensional tensors \{\mathcal{A}^d \in \mathbb{R}^{r_{d-1} \times n_d \times r_d}\}:

\mathcal{W}_{i_1 \cdots i_P} = \sum_{\alpha_1 \cdots \alpha_{P-1}} \mathcal{A}^{1}_{\alpha_0 i_1 \alpha_1} \mathcal{A}^{2}_{\alpha_1 i_2 \alpha_2} \cdots \mathcal{A}^{P}_{\alpha_{P-1} i_P \alpha_P}, \qquad \alpha_0 = \alpha_P = 1,

as depicted in Figure 2, where r_0 = r_P = 1 and the {r_d} are called the tensor-train ranks. With the tensor-train decomposition, we reduce the number of parameters of TT-RNN from (HL + 1)^P to (HL + 1)R^2 P, where R = max_d r_d upper bounds the tensor-train rank.
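To make the construction concrete, below is a minimal NumPy sketch of the higher-order cell in Eq. (4) with the weight tensor stored in tensor-train form. The sizes, the rank, the tanh nonlinearity and all variable names are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

H, L, P, R, D_in = 8, 2, 3, 4, 3   # hidden size, lags, tensor order, TT rank, input dim (all hypothetical)
n = H * L + 1                      # length of the lag state s_{t-1} = [1, h_{t-1}, ..., h_{t-L}]

# TT cores for the weight tensor W_{alpha, i_1..i_P}: the first core carries the output
# index alpha (size H); the remaining cores have shape (r_{d-1}, n, r_d) with r_P = 1.
cores = [rng.normal(scale=0.1, size=(H, n, R))]
cores += [rng.normal(scale=0.1, size=(R, n, R)) for _ in range(P - 2)]
cores += [rng.normal(scale=0.1, size=(R, n, 1))]
W_hx = rng.normal(scale=0.1, size=(H, D_in))

def tt_rnn_cell(x_t, h_hist):
    """One sketch step of a TT-RNN cell: h_t = tanh(W_hx x_t + TT(W) contracted with s, ..., s)."""
    s = np.concatenate([[1.0]] + h_hist)          # lag-state vector, shape (n,)
    acc = np.einsum('anr,n->ar', cores[0], s)     # contract first core with s -> (H, r_1)
    for A in cores[1:]:
        acc = acc @ np.einsum('rns,n->rs', A, s)  # fold in each remaining core
    return np.tanh(W_hx @ x_t + acc[:, 0])        # drop the trailing rank-1 index

h_hist = [np.zeros(H) for _ in range(L)]          # L previous hidden states
h_t = tt_rnn_cell(np.array([0.1, -0.2, 0.05]), h_hist)
print(h_t.shape)                                  # (8,)

As a rough illustration of the savings, with H = 32, L = 3 and P = 3 the full tensor would have (HL + 1)^P ≈ 9.1 × 10^5 entries per output unit, whereas the tensor-train parameterization scales as (HL + 1)R^2 P ≈ 4.7 × 10^3 for R = 4.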

2.3 Approximation results for TT-RNN

A significant benefit of using tensor trains is that we can theoretically characterize the representation power of tensor-train neural networks for approximating high-dimensional functions. The following theorem bounds the approximation error of TT-RNN, viewed as a one-hidden-layer neural network:

Theorem 2.1. Let the state transition function f ∈ H^k_µ be a Hölder continuous function defined on an input domain I = I_1 × · · · × I_d, with bounded derivatives up to order k and finite Fourier magnitude distribution C_f. Then a single-layer Tensor-Train RNN can approximate f with estimation error ε using h hidden units, with

h \le \frac{C_f^2}{\epsilon}\,(d-1)\,\frac{(r+1)^{-(k-1)}}{k-1} + C(k)\,p^{-k}\,\frac{C_f^2}{\epsilon},

where C_f = \int |\omega|_1 |\hat{f}(\omega)|\,d\omega, d is the size of the state space, r is the tensor-train rank and p is the degree of the higher-order polynomial, i.e., the order of the tensor.

For the full proof, see the Appendix. From this theorem we see that: 1) if the target f becomes smoother, it is easier to approximate; and 2) polynomial interactions are more efficient than linear ones: if the polynomial order increases, we require fewer hidden units h. This result applies to the full family of TT-RNNs, including those using a vanilla RNN or LSTM as the recurrent cell, as long as we are given the state transitions (x_t, s_t) ↦ s_{t+1} (e.g., the state transition function learned by the encoder).

Figure 3: Data visualizations: (a) Genz dynamics, (b) traffic data (daily readings from 3 sensors), (c) climate data (yearly readings from 3 stations).

Figure 4: Forecasting RMSE on (a) Genz dynamics, (b) traffic and (c) climate time series as a function of forecasting horizon, for LSTM, MLSTM, and TLSTM.

3 Experiments

We validated the accuracy and efficiency of TT-RNN on three datasets: (1) synthetic Genz dynamics, (2) traffic, and (3) climate time series, visualized in Figure 3. Details are deferred to the Appendix. We compared TT-RNN against two sets of natural baselines: first-order RNNs (vanilla RNN, LSTM) and matrix RNNs (vanilla MRNN, MLSTM), which use matrix products of multiple hidden states without factorization [9]. We observed that TT-RNN with RNN cells outperforms vanilla RNN and MRNN, but using LSTM cells performs best in all experiments. We also evaluated the classic ARIMA time series model and observed that it performs ∼5% worse than LSTM. For traffic, we forecast up to 18 hours ahead given 5 hours of initial inputs. For climate, we forecast up to 300 days ahead given 60 days of initial observations. For Genz dynamics, we forecast 80 steps ahead given 5 initial steps. All results are averages over 3 runs.

Figure 4 shows the test prediction error (RMSE) for varying forecasting horizons on the different datasets. TLSTM notably outperforms all baselines on all datasets in this setting; in particular, TLSTM is more robust to long-term error propagation. We observe two salient benefits of TT-RNNs over the unfactorized models. First, MRNN and MLSTM can suffer from overfitting as the number of weights increases. Second, on traffic, the unfactorized models also show considerable instability in their long-term predictions. These results suggest that tensor-train neural networks learn more stable representations that generalize better over long horizons.

4 Conclusion and Discussion

In this work, we considered forecasting under nonlinear dynamics. We proposed TT-RNN, a novel class of RNNs, provided approximation guarantees for it, and characterized its representation power. We demonstrated the benefits of TT-RNN for accurate forecasting over significantly longer time horizons in both synthetic and real-world multivariate time series data. In other sequential prediction settings, such as natural language processing, there does not (or is not known to) exist a succinct analytical description of the data-generating process. It would be interesting to further investigate the effectiveness of TT-RNNs in such domains as well.

References

[1] Andrew R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, 1993.

[2] Daniele Bigoni, Allan P. Engsig-Karup, and Youssef M. Marzouk. Spectral tensor-train decomposition. SIAM Journal on Scientific Computing, 38(4):A2405–A2439, 2016.

[3] George E. P. Box, Gwilym M. Jenkins, Gregory C. Reinsel, and Greta M. Ljung. Time Series Analysis: Forecasting and Control. John Wiley & Sons, 2015.

[4] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

[5] Valentin Flunkert, David Salinas, and Jan Gasthaus. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. arXiv preprint arXiv:1704.04110, 2017.

[6] C. Lee Giles, Guo-Zheng Sun, Hsing-Hen Chen, Yee-Chun Lee, and Dong Chen. Higher order recurrent networks and grammatical inference. In NIPS, pages 380–387, 1989.

[7] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

[8] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.

[9] Rohollah Soltani and Hui Jiang. Higher order recurrent neural networks. arXiv preprint arXiv:1605.00064, 2016.

[10] Hagen Soltau, Hank Liao, and Hasim Sak. Neural speech recognizer: Acoustic-to-word LSTM model for large vocabulary speech recognition. arXiv preprint arXiv:1610.09975, 2016.

[11] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.


5 Appendix

5.1 Theoretical Analysis

We provide theoretical guarantees for the proposed TT-RNN model by analyzing a class of functions that satisfy a regularity condition. For such functions, tensor-train decompositions preserve weak differentiability and yield a compact representation. We combine this property with neural network estimation theory to bound the approximation error for TT-RNN with one hidden layer, in terms of: 1) the regularity of the target function f, 2) the dimension of the input space, and 3) the tensor-train rank.

In the context of TT-RNN, the target function f(x), with x = s ⊗ . . . ⊗ s, is the system dynamics that describes the state transitions, as in (4). Let us assume that f(x) is a Sobolev function f ∈ H^k_µ, defined on the input space I = I_1 × I_2 × · · · × I_d, where each I_i is a set of vectors. The space H^k_µ is the set of functions that have bounded derivatives up to order k and are L_µ-integrable:

H^k_\mu = \Big\{ f \in L^2_\mu(I) : \sum_{|i| \le k} \|D^{(i)} f\|^2 < +\infty \Big\},    (5)

where D^{(i)} f is the i-th weak derivative of f and µ ≥ 0.¹

Any Sobolev function admits a Schmidt decomposition f(\cdot) = \sum_{i=0}^{\infty} \sqrt{\lambda(i)}\, \gamma(\cdot; i) \otimes \phi(i; \cdot), where {λ} are the eigenvalues and {γ}, {φ} are the associated eigenfunctions. Hence, we can decompose the target function f ∈ H^k_µ as

f(x) = \sum_{\alpha_0, \cdots, \alpha_d = 1}^{\infty} \mathcal{A}_1(\alpha_0, x_1, \alpha_1) \cdots \mathcal{A}_d(\alpha_{d-1}, x_d, \alpha_d),    (6)

where \mathcal{A}_d(\alpha_{d-1}, x_d, \alpha_d) = \sqrt{\lambda_{d-1}(\alpha_{d-1})}\, \phi(\alpha_{d-1}; x_d) are basis functions satisfying \langle \mathcal{A}_d(i, \cdot, m), \mathcal{A}_d(i, \cdot, n) \rangle = \delta_{mn}. We can truncate Eqn (6) to a low-dimensional subspace (r < ∞) and obtain the functional tensor-train (FTT) approximation of the target function f:

f_{TT}(x) = \sum_{\alpha_0, \cdots, \alpha_d = 1}^{r} \mathcal{A}_1(\alpha_0, x_1, \alpha_1) \cdots \mathcal{A}_d(\alpha_{d-1}, x_d, \alpha_d).    (7)

The FTT approximation in Eqn (7) projects the target function to a subspace with a finite basis, and the approximation error can be bounded using the following lemma:

Lemma 5.1 (FTT Approximation [2]). Let f ∈ H^k_µ be a Hölder continuous function defined on a bounded domain I = I_1 × · · · × I_d ⊂ R^d with exponent α > 1/2. The FTT approximation error can be upper bounded as

\|f - f_{TT}\|^2 \le \|f\|^2\, (d-1)\, \frac{(r+1)^{-(k-1)}}{k-1}    (8)

for r ≥ 1, and

\lim_{r \to \infty} \|f_{TT} - f\|^2 = 0    (9)

for k > 1.

Lemma 5.1 relates the approximation error to the dimension d, the tensor-train rank r, and the regularity k of the target function. In practice, TT-RNN implements a polynomial expansion of the input states s, using powers [s, s^{⊗2}, · · · , s^{⊗p}] to approximate f_{TT}, where p is the degree of the polynomial. We can further use classic spectral approximation theory to connect the TT-RNN structure with the degree of the polynomial, i.e., the order of the tensor. Let I_1 × · · · × I_d = I ⊂ R^d. Given a function f and its polynomial expansion P_{TT}, the approximation error is bounded by:

¹A weak derivative generalizes the derivative for (non-)differentiable functions and is defined implicitly: v ∈ L^1([a, b]) is a weak derivative of u ∈ L^1([a, b]) if for all smooth ϕ with ϕ(a) = ϕ(b) = 0: \int_a^b u(t)\varphi'(t)\,dt = -\int_a^b v(t)\varphi(t)\,dt.


Lemma 5.2 (Polynomial Approximation). Let f ∈ H^k_µ for k > 0, and let P_N be the approximating polynomial of degree p. Then

\|f - P_N f\| \le C(k)\, p^{-k}\, |f|_{k,\mu}.

Here |f|^2_{k,\mu} = \sum_{|i|=k} \|D^{(i)} f\|^2 is the semi-norm of the space H^k_µ and C(k) is the coefficient of the spectral expansion. By definition, H^k_µ is equipped with the norm \|f\|^2_{k,\mu} = \sum_{|i| \le k} \|D^{(i)} f\|^2 and the semi-norm |f|^2_{k,\mu} = \sum_{|i|=k} \|D^{(i)} f\|^2. For notational simplicity, we suppress the subscript µ and write ‖·‖ for ‖·‖_{L_µ}.

So far, we have bounded the tensor-train approximation error in terms of the regularity of the target function f. Next we connect the tensor-train approximation to the estimation error of a neural network with one hidden layer. For a neural network with one hidden layer and sigmoid activation function, the following lemma gives the classic bound on the error between a target function f and the single hidden-layer neural network that best approximates it:

Lemma 5.3 (NN Approximation [1]). Given a function f with finite Fourier magnitude distribution C_f, there exists a neural network with n hidden units f_n such that

\|f - f_n\| \le \frac{C_f}{\sqrt{n}},    (10)

where C_f = \int |\omega|_1 |\hat{f}(\omega)|\, d\omega with Fourier representation f(x) = \int e^{i\omega x} \hat{f}(\omega)\, d\omega.

We can now generalize Barron's approximation Lemma 5.3 to TT-RNN. The target function we are approximating is the state transition function f(·) = f(s ⊗ · · · ⊗ s). We express this function using the FTT approximation, followed by the polynomial expansion P_{TT} of the concatenated states. The approximation error of TT-RNN, viewed as a one-hidden-layer network, is then

\|f - P_{TT}\| \le \|f - f_{TT}\| + \|f_{TT} - P_{TT}\|
 \le \|f\| \sqrt{(d-1)\frac{(r+1)^{-(k-1)}}{k-1}} + C(k)\, p^{-k}\, |f_{TT}|_k
 \le \|f - f_n\| \sqrt{(d-1)\frac{(r+1)^{-(k-1)}}{k-1}} + C(k)\, p^{-k} \sum_{|i|=k} \|D^{(i)}(f_{TT} - f_n)\| + o(\|f_n\|)
 \le \frac{C_f}{\sqrt{n}} \left( \sqrt{(d-1)\frac{(r+1)^{-(k-1)}}{k-1}} + C(k)\, p^{-k} \sum_{|i|=k} \|D^{(i)} f_{TT}\| \right) + o(\|f_n\|),

where p is the order of the tensor and r is the tensor-train rank. As the rank of the tensor train and the polynomial order increase, the required number of hidden units becomes smaller, up to a constant that depends on the regularity of the underlying dynamics f.

5.2 Training and Hyperparameter Search

We trained all models using the RMSprop optimizer with a learning-rate decay schedule of 0.8, and performed an exhaustive search over the hyper-parameters on the validation set. Table 1 reports the search ranges used in this work.

Hyper-parameter        Search range
learning rate          10^-1 ... 10^-5
hidden state size      8, 16, 32, 64, 128
tensor-train rank      1 ... 16
number of lags         1 ... 6
number of layers       1 ... 3
number of orders       1 ... 3

Table 1: Hyper-parameter search ranges for the TT-RNN experiments.

For all datasets, we used an 80%-10%-10% train-validation-test split and trained for a maximum of 1e4 steps. We compute the moving average of the validation loss and use it as an early-stopping criterion.

We also did not employ scheduled sampling, as we found training became highly unstable under a range of annealing schedules.
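As a concrete illustration of the search procedure, here is a small Python sketch of a grid over the ranges in Table 1 together with a moving-average early-stopping check. The function and parameter names, and the window/patience values, are assumptions for illustration, not the code used in the experiments.

import itertools
import numpy as np

# Hypothetical search grid mirroring Table 1.
grid = {
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4, 1e-5],
    "hidden_size":   [8, 16, 32, 64, 128],
    "tt_rank":       list(range(1, 17)),
    "num_lags":      list(range(1, 7)),
    "num_layers":    [1, 2, 3],
    "num_orders":    [1, 2, 3],
}

def should_stop(val_losses, window=10, patience=5):
    """Stop when the moving average of the validation loss has not improved for `patience` checks."""
    if len(val_losses) < window + patience:
        return False
    ma = np.convolve(val_losses, np.ones(window) / window, mode="valid")
    return ma[-1] > ma[:-patience].min()

for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    # A hypothetical train_and_validate(config) would run up to 1e4 RMSprop steps with
    # learning-rate decay 0.8, calling should_stop(...) on the validation-loss curve.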

5.3 Dataset Details

Genz Genz functions are often used as a basis for evaluating high-dimensional function approximation. In particular, they have been used to analyze tensor-train decompositions [2]. There are six different Genz functions:

g_1(x) = \cos(2\pi w + cx), \quad g_2(x) = \left(c^{-2} + (x+w)^{2}\right)^{-1}, \quad g_3(x) = (1 + cx)^{-2},
g_4(x) = e^{-c^2 \pi (x-w)^2}, \quad g_5(x) = e^{-c^2 \pi |x-w|}, \quad g_6(x) = \begin{cases} 0 & x > w \\ e^{cx} & \text{otherwise.} \end{cases}

For each function, we generated a dataset of 10,000 samples using the recurrence (11), with w = 0.5, c = 1.0 and random initial points drawn from the range [−0.1, 0.1]:

x_{t+1} = \left(c^{-2} + (x_t + w)^{2}\right)^{-1}, \qquad c, w \in [0, 1].    (11)
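A minimal sketch of this data-generation step for the product-peak recurrence in Eq. (11); the trajectory length and function names are illustrative assumptions.

import numpy as np

def genz_rollout(x0, steps=100, c=1.0, w=0.5):
    """Iterate x_{t+1} = (c^{-2} + (x_t + w)^2)^{-1} (Eq. 11) from an initial point x0."""
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(1.0 / (c ** -2 + (xs[-1] + w) ** 2))
    return np.array(xs)

rng = np.random.default_rng(0)
initial_points = rng.uniform(-0.1, 0.1, size=10_000)          # random initial states
data = np.stack([genz_rollout(x0) for x0 in initial_points])  # synthetic series, shape (10000, 100)
print(data.shape)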

Traffic We use traffic data from the Los Angeles County highway network, collected by the California Department of Transportation (http://pems.dot.ca.gov/). The dataset consists of 4 months of speed readings aggregated every 5 minutes. Due to the large number of missing values (∼30%) in the raw data, we impute the missing values using the average of the non-missing entries from other sensors at the same time. In total, after processing, the dataset covers 35,136 time-series readings. We treat each sequence as the daily traffic of 288 time stamps. We sub-sample the dataset to readings every 20 minutes, which results in a dataset of 8,784 sequences of daily measurements. We select 15 sensors as a joint forecasting task.

Climate We use daily maximum temperature data from the U.S. Historical Climatology Network (USHCN) daily dataset (http://cdiac.ornl.gov/ftp/ushcn_daily/), which contains daily measurements of 5 climate variables for approximately 124 years. The records were collected across more than 1,200 locations and span over 45,384 days. We analyze the area in California, which contains 54 stations. We removed the first 10 years of data, most of which have no observations. We treat the temperature readings of each year as one sequence and impute the missing observations using non-missing entries from other stations across years. We augment the dataset by rotating each sequence every 7 days, which results in a dataset of 5,928 sequences.

We also perform a Dickey-Fuller test of the null hypothesis that a unit root is present in an autoregressive model. The test statistics for the traffic and climate data are shown in Table 2, which illustrates the non-stationarity of the time series.

                        Traffic                Climate
Test statistic          0.00003     0          3e-7        0
p-value                 0.96        0.96       1.12e-13    2.52e-7
Number of lags used     2           7          0           1
Critical value (1%)     -3.49       -3.51      -3.63       2.7
Critical value (5%)     -2.89       -2.90      -2.91       -3.70
Critical value (10%)    -2.58       -2.59      -2.60       -2.63

Table 2: Dickey-Fuller test statistics for the traffic and climate data used in the experiments.
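For reference, statistics of the kind reported in Table 2 can be computed with the standard Augmented Dickey-Fuller test in statsmodels; the sketch below uses a random-walk placeholder series, not the actual sensor or station data.

import numpy as np
from statsmodels.tsa.stattools import adfuller

# Placeholder series (a random walk); replace with one traffic or climate sequence.
series = np.random.default_rng(0).normal(size=288).cumsum()

adf_stat, p_value, used_lags, n_obs, critical_values, _ = adfuller(series)
print(f"ADF statistic: {adf_stat:.4f}  p-value: {p_value:.4f}  lags used: {used_lags}")
print("Critical values:", critical_values)   # {'1%': ..., '5%': ..., '10%': ...}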

5.4 Prediction Visualizations

Genz functions are basis functions for multi-dimensional function approximation. Figure 5 visualizes the different Genz functions, realizations of their dynamics, and predictions from TLSTM and the baselines. We can see that for "oscillatory", "product peak" and "Gaussian", TLSTM better captures the complex dynamics, leading to more accurate predictions.

5.5 More Chaotic Dynamics Results

Chaotic dynamics such as the Lorenz attractor are notoriously difficult to learn in nonlinear dynamical systems. In such systems, the dynamics are highly sensitive to perturbations in the input state: two close points can move exponentially far apart under the dynamics.

Figure 5: Visualizations of Genz functions, dynamics and predictions from TLSTM and baselines. Rows: (a-c) g1 oscillatory, (d-f) g2 product peak, (g-i) g3 corner peak, (j-l) g4 Gaussian, (m-o) g5 continuous, (p-r) g6 discontinuous. Left column: transition functions; middle: realizations of the dynamics; right: model predictions for LSTM (green) and TLSTM (red).


We also evaluated tensor-train neural networks on long-term forecasting for the Lorenz attractor and report the results below.

Lorenz The Lorenz attractor system describes a two-dimensional flow of fluids:

\frac{dx}{dt} = \sigma(y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z, \qquad \sigma = 10,\ \rho = 28,\ \beta = 2.667.

This system has chaotic solutions (for certain parameter values) that revolve around the so-called Lorenz attractor. We simulated 10,000 trajectories with a discretized time interval of length 0.01 and sample from each trajectory every 10 units in Euclidean distance. The dynamics are generated using σ = 10, ρ = 28, β = 2.667, and the initial condition of each trajectory is sampled uniformly at random from the interval [−0.1, 0.1].

Figure 6 shows 45-step-ahead predictions for all models. HORNN is the full-tensor TT-RNN using a vanilla RNN unit without the tensor-train decomposition. We can see that all tensor models perform better than vanilla RNN or MRNN, and TT-RNN shows a slight improvement in the initial states.
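For completeness, a short sketch of how such Lorenz trajectories can be generated with a simple Euler discretization (step size 0.01); the integration scheme and function name are assumptions for illustration, not the authors' simulation code.

import numpy as np

def lorenz_trajectory(x0, y0, z0, steps=10_000, dt=0.01,
                      sigma=10.0, rho=28.0, beta=2.667):
    """Euler-discretized Lorenz system: dx/dt = sigma(y-x), dy/dt = x(rho-z)-y, dz/dt = xy - beta*z."""
    traj = np.empty((steps, 3))
    x, y, z = x0, y0, z0
    for t in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[t] = (x, y, z)
    return traj

rng = np.random.default_rng(0)
traj = lorenz_trajectory(*rng.uniform(-0.1, 0.1, size=3))   # one trajectory from a random initial state
print(traj.shape)                                           # (10000, 3)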

Figure 6: Long-term predictions (red) versus the ground truth (blue) for (a) RNN, (b) MRNN, (c) HORNN, (d) TT-RNN, and (e) TLSTM. TT-RNN shows more consistent, but imperfect, predictions, whereas the baselines are highly unstable and give noisy predictions.

