Prediction of Commodity Prices in Rapidly Changing Environments

Sarunas Raudys¹,² and Indre Zliobaite¹

¹ Dept. of Informatics, MIF, Vilnius University
² Institute of Mathematics and Informatics, Naugarduko st. 24, Vilnius 03225, Lithuania
[email protected], [email protected]

Abstract. In dynamic financial time series prediction, neural network training based on short data sequences results in more accurate predictions than using lengthy historical data. The optimal training set size is determined theoretically and experimentally. To reduce the generalization error we: a) perform dimensionality reduction by mapping the input data into a low-dimensional space using the multilayer perceptron, b) train a single layer perceptron classifier with short sequences of the low-dimensional input data series, c) each time initialize the perceptron with the weight vector obtained after training with the previous portion of the data sequence, d) make use of useful preceding historical information accumulated in the financial time series data by an early stopping procedure.

Keywords: Classification, changing environments, commodity prices, forecasting, neural networks.

1 Introduction to the Problem

A characteristic feature of current research in economic and financial data mining methods is environment dynamics. The changes in the nature of the data may range from minor fluctuations of the underlying probability distributions and steady trends, random or systematic, to the rapid substitution of one prediction task with another [1]. Therefore, in the financial prediction task, the algorithms ought to include means to reflect the changes, to be able to operate in new, unknown environments, to adapt to sudden situational changes, to compete with other methods and to survive [2]. If only very short training sequences are allowed, one needs to reduce both the dimensionality of the data (the number of features) and the complexity of the training algorithm. Moreover, one needs to find ways to utilize useful information accumulated in the time series history. The objective of the present research is to analyze the theoretical background of the essentially unsolvable financial time series prediction task and to suggest methods for achieving more reliable forecasting results in situations where the environment is changing permanently. In this paper we reformulate the forecasting task as a pattern classification problem.

2 Originality and Contribution

Analysis of financial time series is usually aimed at predicting the price of a given security in the future. What is most important in this task is the direction of the


price change (i.e., whether the price goes up or down compared to today's value), which matters more than the absolute value of the prediction error (i.e., by how many basis points the price changes). For the investors concerned, a wrongly predicted direction may bring significantly higher losses than an absolute error in the predicted security price when the direction of the price change is predicted correctly. We analyze the financial time series prediction task by delimiting it to a classification task, i.e., classifying the trading days into days of increase and days of decrease in the security price as compared to previous periods. Such delimitation serves two purposes: 1) the analysis gets less complicated and the results achieved are easier to interpret; 2) the analysis gives more value added in practice. We solve the financial time series prediction task using classification tools; therefore, hereinafter we refer to prediction when addressing the task and to classification when discussing the tools.

To develop classification and prediction algorithms aimed to work in changing environments, training of the algorithms ought to be based on short learning sequences. For that reason, the algorithms should operate in a low-dimensional space and be able to make use of partially correct historical information accumulated in the past. To achieve this goal, the data is mapped into a low-dimensional space by the wrapper-approach-based multilayer perceptron (MLP) training first considered in [3]. To obtain a simple nonlinear classifier for the final classification, second order polynomial features are formed in the new feature space and a single layer perceptron (SLP) based classifier is trained starting from the weight vector obtained from the previous portion of the time series data. To save useful information extracted from the previous data, the training process is terminated well before the minimum of the current cost function is reached. By analyzing the training dynamics of the standard Fisher linear discriminant function, we demonstrate theoretically that the generalization error has a minimum at a finite training sequence length if the data changes stochastically. The optimal length of the training data is inversely proportional to the intensity of the data changes. The usefulness of the suggested forecasting methodology is demonstrated by solving a commodity price forecasting task on stock market statistical data.

3 Research in the Field and Competing Techniques

The financial forecasting task is of great interest to practitioners as well as academics. The reader is referred to the excellent review [4], where several methods for forecasting using artificial neural networks (ANN) are compared, to a large extent applicable to various financial time series. The environment dynamics and the research in the field performed over several years are considered in [2, 4, 5]. Kuncheva [1] classifies possible changes in the class descriptions into random noise, random trends, random substitutions and systematic trends, and suggests employing different strategies for building the classifier depending on the type of changes. To select a small number of attributes for training and performing the forecasting task, one typically uses a fixed length of historical data and reduces the number of input factors. Moody states and experimentally shows in [6] that there exists an optimum training window length at which the test error is minimal. A sliding window


approach for commodity price prediction was used by Kohzadi et al. [7] back in 1996. Fieldsend and Singh [8] give a novel framework for implementing multi-objective optimization within the evolutionary neural network domain for time series prediction; sound results were achieved. Unfortunately, due to the huge profit opportunities involved in financial time series forecasting, disclosure of the most significant findings in this field is particularly limited.

4 Proposed Method

In this section we present the theoretical foundation of the pattern classification algorithm used to forecast non-stationary financial time series, based on the knowledge that an excessive increase in training set length can deteriorate the generalization error when the algorithm is applied to forecast future data.

4.1 Training Set Length and Generalization Error

Consider a p-dimensional two-category classification problem. Suppose the classes are Gaussian with different mean vectors $\mu_1$, $\mu_2$ and a common pooled $p \times p$ covariance matrix $\Sigma = ((\sigma_{sr}))$. Then the asymptotically optimal classifier is the Fisher linear discriminant function

$$g_F(X) = X^T \hat{w}_F + w_0, \qquad \hat{w}_F = \hat{\Sigma}^{-1}(\hat{\mu}_1 - \hat{\mu}_2), \qquad w_0 = -\tfrac{1}{2}(\hat{\mu}_1 + \hat{\mu}_2)^T \hat{w}_F,$$

where $\hat{w}_F$, $w_0$ are the weights of the sample-based DF, and the classification of a vector X is performed according to the sign of the discriminant function (DF). The sample mean vectors and covariance matrix are estimated from the training set vectors:

$$\hat{\mu}_i = \frac{1}{N} \sum_{j=0}^{N-1} X_{2j+i} \quad (i = 1, 2), \qquad \hat{\Sigma} = \frac{1}{n} \sum_{i=1}^{2} \sum_{j=0}^{N-1} (X_{2j+i} - \hat{\mu}_i)(X_{2j+i} - \hat{\mu}_i)^T,$$

where N is the number of training vectors in one class and n = 2N (for a general introduction to statistical pattern recognition see e.g. [9, 10]). If the true distributions of the input vectors X are Gaussian with parameters $\mu_1$, $\mu_2$, $\Sigma$, the expected probability of misclassification (mean generalization error) is [10]

$$\varepsilon_N^F \approx \Phi\{-\tfrac{1}{2}\,\delta\,(T_M T_\Sigma)^{-1/2}\}, \qquad (1)$$

where $\delta^2 = M^T \Sigma^{-1} M$ is the squared Mahalanobis distance, $M = (m_1, \ldots, m_p)^T = \mu_1 - \mu_2$, the term $T_M = 1 + 4p/(\delta^2 n)$ arises due to inexact sample estimation of the mean vectors of the classes, and the term $T_\Sigma = 1 + p/(n - p)$ arises due to inexact sample estimation of the covariance matrix. Equation (1) shows that with an increase in the training set size N the mean generalization error decreases monotonically. If the Fisher discriminant function is utilized to classify time series data which varies in time, the changes of the data cause additional inaccuracies in the estimates of the parameters $\mu_1$, $\mu_2$, $\Sigma$ that accumulate over time. Thus, with an increase in training set size the mean generalization error decreases at first, reaches a minimum and then starts increasing.
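For illustration, the following Python sketch (our own, assuming NumPy and SciPy are available; Φ is the standard normal CDF) evaluates Eq. (1) for the stationary case; the Mahalanobis distance value used in the example is arbitrary.

```python
import numpy as np
from scipy.stats import norm

def fisher_generalization_error(delta2, p, N):
    """Mean generalization error of the sample-based Fisher DF, Eq. (1)."""
    n = 2 * N                                # total number of training vectors
    if n <= p:
        return np.nan                        # sample covariance is singular
    T_M = 1.0 + 4.0 * p / (delta2 * n)       # inexact estimation of the means
    T_S = 1.0 + p / (n - p)                  # inexact estimation of the covariance
    return norm.cdf(-0.5 * np.sqrt(delta2) / np.sqrt(T_M * T_S))

# Stationary case: the error decreases monotonically with N.
for N in (10, 20, 50, 100, 500):
    print(N, round(fisher_generalization_error(delta2=2.5**2, p=10, N=N), 4))
```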


We will demonstrate this phenomenon theoretically. The simplest model of changes in the distribution of X, a random drift, will be considered. Here we suppose that at each time moment, dependent Gaussian random variables $\zeta_{2j+i} = \sum_{s=0}^{j} \xi_{2s+i}$ (i = 1, 2; j = 0, 1, …, N−1) are added to the components of $X_{2j+i}$; the random contributions $\xi_{2s+i}$ accumulate. Then

$$\hat{\Sigma}^{*} = \frac{1}{n} \sum_{i=1}^{2} \sum_{j=0}^{N-1} \Big(X_{2j+i} - \hat{\mu}_i + I\big(\zeta_{2j+i} - \tfrac{1}{N}\textstyle\sum_{j=0}^{N-1} \zeta_{2j+i}\big)\Big)\Big(X_{2j+i} - \hat{\mu}_i + I\big(\zeta_{2j+i} - \tfrac{1}{N}\textstyle\sum_{j=0}^{N-1} \zeta_{2j+i}\big)\Big)^T,$$

where I stands for the p-dimensional column vector composed of ones, $I = (1, 1, \ldots, 1)^T$, and the $\xi_{2s+i} \sim N(0, \alpha^2)$ are independent. After tedious combinatorial and statistical analysis utilizing the first terms of a Taylor series expansion, we obtain the expectations of the means and of the covariance matrix with respect to the random variables $\zeta_{2j+i}$:

$$E\,\hat{\mu}_i^{*} = \hat{\mu}_i, \qquad E\,\hat{\Sigma}^{*} = \hat{\Sigma} + \mathbf{E}\beta, \qquad (2)$$

where $\mathbf{E}$ stands for the $p \times p$ matrix composed of ones and $\beta = \tfrac{1}{3}\alpha^2 N$. Using (2), for small $\alpha$ and N we find the effective Mahalanobis distance

$$\delta^{*} = \big(M^T(\Sigma + \mathbf{E}\beta)^{-1}M\big)\big(M^T(\Sigma + \mathbf{E}\beta)^{-1}\Sigma(\Sigma + \mathbf{E}\beta)^{-1}M\big)^{-1/2}. \qquad (3)$$

Then, for small $\alpha$ and N, we can use Eq. (1) with $\delta^{*}$ in place of $\delta$ to calculate approximate values of the mean generalization error. Note that both the effective Mahalanobis distance and the mean generalization error depend on all components of the vector M and the matrix Σ.
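A minimal NumPy sketch of Eq. (3) follows; the parameter values mirror those quoted in the caption of Fig. 1a, and the construction of Σ is our reading of that caption, so treat it as illustrative rather than a reproduction of the paper's exact computation.

```python
import numpy as np

def effective_delta(M, Sigma, alpha, N):
    """Effective Mahalanobis distance under the random drift model, Eq. (3)."""
    p = len(M)
    beta = alpha**2 * N / 3.0                     # beta = (1/3) * alpha^2 * N
    A = np.linalg.inv(Sigma + beta * np.ones((p, p)))
    return (M @ A @ M) / np.sqrt(M @ A @ Sigma @ A @ M)

# Parameter values quoted in the caption of Fig. 1a:
p = 10
M = np.full(p, 1.2526)
Sigma = np.eye(p)
Sigma[0, 1:] = Sigma[1:, 0] = 0.1
# Plugging delta* into Eq. (1) (fisher_generalization_error from the previous
# sketch) gives the drift-adjusted error curve of Fig. 1a.
for N in (10, 20, 40, 80, 160):
    d = effective_delta(M, Sigma, alpha=0.1, N=N)
    print(N, round(fisher_generalization_error(d**2, p, N), 4))
```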

In Fig. 1a we present a graph of $\varepsilon_N^F$ as a function of N. The theoretical calculations confirm that statistical dependence between subsequent components of a multivariate time series deteriorates the generalization performance and diminishes the optimal training length.

4.2 Effect of Initialization and Early Stopping in Changing Environments

The analysis performed in the previous section has shown that the learning sequence should be short if the time series characteristics are changing all the time. This means that in training we ought to reject old data. Old data, however, may contain information which, if correctly used, may prove useful. One way to save previously accumulated information in situations of permanent environmental change is to start training from the weight vector obtained with the previous portion of the data, which on its own is not precise enough to be relied upon. It was demonstrated that the information contained in the initial weight vector can be saved if training of the perceptron is stopped at the right time [11]. Because in perceptron training we obtain a sequence of classifiers of varying complexity [10], and because the accuracies of the initial and final weight vectors are unknown, the early stopping has to be performed in an empirical way.
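To make the warm-start idea concrete, here is a minimal sketch assuming a sigmoid SLP trained by gradient descent; the epoch budget of 19 echoes Fig. 2b, but the class itself, the learning rate and the loss are our illustrative choices, not the paper's exact implementation.

```python
import numpy as np

class SLP:
    """Single layer perceptron with a sigmoid output unit."""

    def __init__(self, w_init):
        self.w = np.asarray(w_init, dtype=float).copy()  # warm start

    def train(self, X, y, epochs=19, lr=0.1):
        # Early stopping is enforced by the small, fixed epoch budget, so the
        # information carried over in w_init is not entirely trained away.
        Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias input
        for _ in range(epochs):
            out = 1.0 / (1.0 + np.exp(-Xb @ self.w))     # sigmoid activation
            self.w -= lr * Xb.T @ (out - y) / len(y)     # gradient step
        return self.w

    def predict(self, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return (Xb @ self.w > 0).astype(int)
```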

[Fig. 1 appears here: graph a plots the generalization error against the training set length N for two drift intensities, α = 0.1 and α = 0.02; graph b plots the testing error against the training set length in years, with the level error = 0.203 marked.]

Fig. 1. Generalization error $\varepsilon_N^F$ as a function of the training set length N: a) the Fisher classifier, theoretical result (p = 10; $m_s$ = 1.2526; $\sigma_{ss}$ = 1; $\sigma_{1s}$ = 0.1 if r = 1; $\sigma_{rs}$ = 0 if r > 1); b) MLP, pork price forecasting task. The bold line in graph b denotes an approximation of the results.

4.3 Two Stage Pattern Classification Algorithm

The algorithm used to solve the commodity price forecasting task is divided into two stages (see Table 1).

Table 1. The algorithm proposed

1st stage   Step 1   Data preparation (get TR and TE)
            Step 2   Initial dimensionality reduction using MLP (TR, TE -> TR3, TE3)
2nd stage   Step 3   Derivation of polynomial features (TR3, TE3 -> TR9, TE9)
            Step 4   SLP training on TR9, testing on TE9 using the "sliding window" approach

Below we discuss the underlying considerations of stage 1 and stage 2. Data preparation and the flow of training and testing are presented further in the simulation experiment, Section 5.

The First Stage of the Algorithm. In order to design a functional classification algorithm capable of working well when trained on very short non-stationary time series data, we first have to reduce the number of features, since a large dimensionality of the input vectors increases the need for training samples. For dimensionality reduction we selected the multilayer perceptron (MLP), used to map the data into a low-dimensional space as suggested in [3]. This simple feature extraction (FE) method performs linear feature extraction with a nonlinear performance criterion. The r new features $z_1, z_2, \ldots, z_r$ are linearly weighted sums, $z_s = \sum_{j=1}^{p} w_{sj} x_j$ (s = 1, 2, …, r), of the p inputs (r < p), calculated in the r hidden neurons. The new extracted feature space depends on the minimization criterion used in training, i.e., on the complexity of the decision boundary. Thus, the number of hidden units of the MLP (the number of new features, r) affects the complexity of the feature extraction procedure. In spite of its simplicity, this is a very powerful feature extraction method which allows making use of the discriminatory information contained in all input features. Nevertheless, in finite training sample size situations, one cannot use many hidden units. After several


preliminary experiments performed with the data TR (every second vector was used for training the MLP classifier and the remaining vectors of TR were used for validation, determining the number of training epochs, $t_L$), three hidden neurons were selected, i.e., r = 3. Then the MLP, trained on all data TR for $t_L$ epochs, produced the new three-dimensional data sets TR3 and TE3. This was the first stage of the algorithm.

The Second Stage of the Algorithm. Following the considerations presented in the previous subsection, in the second stage of the algorithm we have to apply an adaptive classifier capable of making use of the historical data series information accumulated in the starting weight vector $w_{start}$. Analysis showed that in the second stage a non-linear boundary would be preferable. The MLP classifier can easily be trapped in bad local minima. Therefore, we chose an SLP classifier operating in a second order polynomial feature space derived from TR3 and TE3: instead of the three features $z_1, z_2, z_3$, we used nine new ones: $z_1, z_2, z_3, z_1^2, z_2^2, z_3^2, z_1 z_2, z_1 z_3, z_2 z_3$. Thus the SLP classifier was trained in a 9-dimensional (9D) space. To save possibly useful information contained in the starting 10-dimensional weight vector $w_{start}$, we had to stop training early, much earlier than the minimum of the cost function was reached. In a practical application of this approach, the cost function to be minimized could be very large at the first iteration. Therefore, each new training session started from the scaled initial weight vector $\kappa \times w_{start}$, where the parameter κ was determined from the minimum of the cost function estimated on the testing set after recording the current test results.
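The feature side of the two stages can be sketched as follows, under the assumption that the trained MLP's input-to-hidden weights are available as a matrix W_hidden; the function names are ours, not the paper's.

```python
import numpy as np

def mlp_hidden_features(X, W_hidden):
    """First stage: linear feature extraction z_s = sum_j w_sj * x_j, computed
    in the r hidden neurons of a trained MLP (W_hidden has shape (p, r)).
    The nonlinearity enters only through the MLP training criterion."""
    return X @ W_hidden

def polynomial_features_3to9(Z):
    """Second stage input: expand the 3 extracted features into the 9 terms
    z1, z2, z3, z1^2, z2^2, z3^2, z1*z2, z1*z3, z2*z3."""
    z1, z2, z3 = Z[:, 0], Z[:, 1], Z[:, 2]
    return np.column_stack([z1, z2, z3, z1**2, z2**2, z3**2,
                            z1 * z2, z1 * z3, z2 * z3])

# Hypothetical usage, where W_hidden comes from an MLP with 20 inputs and
# r = 3 hidden units trained on TR:
# TR9 = polynomial_features_3to9(mlp_hidden_features(TR, W_hidden))
```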

5 Simulation Experiment

Data Used. To demonstrate the usefulness of the algorithm described above, we analyzed real world 5-dimensional financial data recorded in the period from November 1993 till January 2005. The price of Pork Bellies was chosen as the forecasting target. The following variables were used as inputs to the algorithm: x1 - spring wheat, x2 - raw cane sugar (as other edible commodities), x3 - gold bullion (as an alternative currency), x4 - the American Stock Exchange (AMEX) oil price index (supposed to be influential for edible commodity prices due to transportation and technology) and x5 - the Pork Bellies price. Input data vectors were formed using the four-day price history of each of the presented variables. The resulting 20-dimensional data matrix X was split into training data TR (the first 1800 days of history) and testing data TE (the last 1100 days).

As we are dealing with financial variables, a highlight from finance theory needs to be addressed. The Efficient Market Hypothesis [12] states that in an efficient market the prices reflect all the information available from the market. Thus, a statistically significant forecast can be made only in situations where either the market is not efficient enough in terms of information flow or the problem solving method is unexpected by the other participants. Therefore, we constructed an original index,


$$Y_t = (B_{t+2} + B_{t+1})/(B_t + B_{t-1}) - (B_{t+1} + B_t)/(B_{t-1} + B_{t-2}),$$

where $B_t$ is the Pork Bellies price on day t, formed from the historical data. $Y_t$ is our forecasting target; such an index has not been used by other researchers. We formulate the forecasting task as a pattern classification problem. First we calculate $Y_t$ for all training data TR. Then we select two threshold values, $Y_{max}$ and $Y_{min}$, such that $Y_{max}$ is the smallest value among the 25% highest $Y_t$ and $Y_{min}$ is the highest value among the 25% smallest $Y_t$, both calculated from the training set TR. In this way we split the training as well as the testing data (using the same thresholds, determined only from the training data) into the categories C1 ($Y_t > Y_{max}$), C2 ($Y_t < Y_{min}$) and $C_{average}$, the remaining 50% of the data. The first two classes, C1 and C2, were used to develop the classification algorithm based on the theory presented above.

Experimental Design. The 9-dimensional two-category training and test data sets TR9 and TE9, composed of 450+450 and 199+184 vectors, were obtained as described in Section 4.3. The number of hidden units, the MLP initialization and the optimal number of MLP training epochs were determined on pseudo-validation sets formed from each training subset by means of colored noise injection [10, 13]. The data TE9 was split into 44 non-intersecting blocks composed of 25 consecutive days each. In each iteration, the training subset consisted of the L days ending one day before the current testing block starts (depending on L, the training blocks intersected by 0-98%; at the optimum, $L_{opt}$ = 210, we had 88% intersection). Note that in our analysis we skipped the middle pattern class, $C_{average}$; therefore, the numbers of vectors from classes C1 and C2 in each single block differed. Training was performed if there were more than two vectors of each class (C1 and C2) in the current training block. If there were no testing vectors in the current testing block, we used κ = 1. For training the neural network and testing the results, the sliding window approach was used: the SLP was trained on each subsequent 9-dimensional block of data, starting from the weight vector $w_{start}$ obtained after training on the data of the preceding block. This supplementary training before testing on each of the 44 testing blocks was essential, as the environment affecting commodity prices changes very rapidly and training takes place on a short history.

Results. The core simulations presented in this paper are summarized in Table 2. In Fig. 1b we show the influence of the training set length (in years) on the average generalization error in classifying 383 two-category (C1 and C2) 20D test vectors of TE by means of an MLP with 3 hidden units: 1100 days for testing (44 blocks, 25 days each). We see several minima in the smoothed graph (bold curve). The presence of several minima is caused by the fact that real world environmental changes do not follow the simplified assumptions used for the derivation of the analytical formula in Section 4.1. The first convex sector in the averaged graph does not lead to the optimal test error, as there is a lack of training vectors compared to the number of features (p = 20) at that short training window.
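A minimal sketch of the target and class construction described above, assuming the daily Pork Bellies prices are available as a NumPy array; the function names are ours.

```python
import numpy as np

def target_index(B):
    """Y_t = (B_{t+2}+B_{t+1})/(B_t+B_{t-1}) - (B_{t+1}+B_t)/(B_{t-1}+B_{t-2})."""
    Y = np.full(len(B), np.nan)
    for t in range(2, len(B) - 2):
        Y[t] = ((B[t + 2] + B[t + 1]) / (B[t] + B[t - 1])
                - (B[t + 1] + B[t]) / (B[t - 1] + B[t - 2]))
    return Y

def label_days(Y, Y_train):
    """C1 = top 25% of Y_train, C2 = bottom 25%; 0 marks C_average.
    The thresholds come from the training portion only."""
    y_max = np.nanpercentile(Y_train, 75)   # smallest of the 25% highest values
    y_min = np.nanpercentile(Y_train, 25)   # highest of the 25% smallest values
    return np.where(Y > y_max, 1, np.where(Y < y_min, 2, 0))
```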


Table 2. Simulation experiments

Fig. 1a   Fisher classifier, theoretical result   Generalization error as a function of training set length. Experiment with artificial data.
Fig. 1b   MLP, pork price forecasting task        Generalization error as a function of training set length. Experiment on real data repeated 179 times with different training set lengths L (20…1800). Without feature reduction; MLP used for training.
Fig. 2a   SLP, pork price forecasting task        Generalization error as a function of training set length. Experiment on real data repeated 179 times with different training set lengths L (20…1800). Feature reduction using MLP; final training using SLP.
Fig. 2b   SLP, pork price forecasting task        Generalization error as a function of the number of training epochs. Experiment on real data repeated 150 times with different numbers of training epochs (1…150). Feature reduction using MLP; final training using SLP.

In Fig. 2 we present similar graphs obtained with the SLP in the 9-dimensional feature space. In this case, two types of experiments were performed with the test set data TE9: a) for a fixed number of training epochs, t* (this number was evaluated during additional experiments), the training window L was varied in the interval [20, 1800] days; b) the training window length L was fixed (it was also obtained in additional experiments) and the number of training epochs was varied in the interval [1, 150].
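For completeness, a sketch of the sliding window evaluation loop, reusing the SLP class from the Section 4.2 sketch; the defaults L = 210 and epochs = 19 echo the reported optima, and the κ scaling of the starting weights is omitted for brevity, so this is an outline of the protocol rather than the authors' exact code.

```python
import numpy as np

def sliding_window_test_error(X9, labels, L=210, block=25, epochs=19):
    """Walk the test period in 25-day blocks: retrain the SLP on the L days
    ending one day before each block, warm-starting from the weights of the
    previous training session, then test on the block."""
    w = np.zeros(X9.shape[1] + 1)            # 10D weight vector: 9 features + bias
    errors = []
    for start in range(L, len(X9) - block + 1, block):
        tr, te = slice(start - L, start), slice(start, start + block)
        in_tr, in_te = labels[tr] != 0, labels[te] != 0   # drop C_average days
        y_tr = (labels[tr][in_tr] == 1).astype(float)     # 1 for C1, 0 for C2
        if y_tr.sum() > 2 and (1 - y_tr).sum() > 2:       # need >2 vectors per class
            slp = SLP(w)                                  # warm start
            w = slp.train(X9[tr][in_tr], y_tr, epochs=epochs)
            if in_te.any():
                pred = slp.predict(X9[te][in_te])
                errors.append(np.mean(pred != (labels[te][in_te] == 1)))
    return float(np.mean(errors))
```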

[Fig. 2 appears here: panel a plots the testing error against the training set size in years, with the optimum near 210 days (training set size < 1 year); panel b plots the testing error against the number of training epochs, with the optimum at 19 epochs; both panels mark the level error = 0.168.]

Fig. 2. Classification error as a function of window size (in years) (a) and of the number of training epochs (b). The bold line denotes an approximation of the results.

We see that the minimum of the generalization error in the 9D space (16.8%) is notably lower than that in the original 20D feature space (20.3%). This means that our strategy of extracting features and training the SLP in a low-dimensional space, with initialization from the previous weight vector and early stopping, proved fruitful. We have to admit that in the experiments with the SLP in the 9D space, the test data participated in determining the optimal length of the training history and the number of training epochs. This is a shortcoming. The only consolation is that nowadays the world trade market changes so rapidly that any optimal parameters determined six years ago do not fit today. Similar gains in accuracy were obtained in forecasting experiments on oil and sugar prices.

6 Implementation

In dynamic financial time series prediction, neural network training based on short data sequences results in more accurate predictions than using lengthy historical data. The optimal training set size was established theoretically and experimentally. To reduce the generalization error we suggest: a) mapping the data into a low-dimensional space using the multilayer perceptron, b) making the final forecast with a single layer perceptron classifier trained in the low-dimensional space, c) initializing the SLP with the weight vector obtained after training the perceptron on the previous portion of the data sequence, d) saving useful preceding historical information by early stopping. The proposed methodology was tested on other financial time series as well; in particular, for the oil and sugar forecasting tasks the gains achieved supported the conclusions made here. As the objective of the present research is to analyze methodological issues rather than to present detailed results, the experiments with other commodity price forecasting tasks are left out of the scope of the paper. When forming the initial data classes C1 and C2, 50% of the data was omitted for business reasons. Experiments showed that assigning all the data, or a different percentage of it, to classes C1 and C2 influences the absolute testing error but gives the same principal results, i.e., a gain from dimensionality reduction using the MLP, a gain from scaled preservation of information from previous trainings, and the presence of an optimal length of the learning sequence.

7 Conclusions

Our experimental results confirm the following about practical utilization of the forecasting approach developed in this paper:

1. The theoretically derived optimal length of the time series history to be utilized to improve the forecasting algorithm does not fit practical applications, since the parameters of the environmental changes are not known and since in the real world the changes follow much more complicated laws (if such laws exist at all).

2. A forecasting strategy consisting of a) reducing the number of features by utilizing a long history of the multivariate financial data series by means of MLP based linear feature extraction, and b) training the SLP classifier in a low-dimensional polynomial feature space starting from the scaled previous weight vector with early stopping, could become a good alternative to existing forecasting methods.

In each practical application, the optimal length of the time series history and the optimal perceptron training time have to be determined from the latest data history and additional non-formal end user information. A possible way to find these values is to solve several forecasting problems simultaneously.

Prediction of Commodity Prices in Rapidly Changing Environments

163

Acknowledgements

The authors thank Dr. Aistis Raudys for sharing the Matlab codes and for useful and challenging discussions in the field.

References

[1] Kuncheva, L. Classifier ensembles for changing environments. Lecture Notes in Artificial Intelligence, Springer-Verlag, 3077: 1-15, 2004.
[2] Raudys, S. Survival of intelligent agents in changing environments. Lecture Notes in Artificial Intelligence, Springer-Verlag, 3070: 109-117, 2004.
[3] Raudys, A. and Long, J.A. MLP based linear feature extraction for nonlinearly separable data. Pattern Analysis and Applications, 4(4): 227-234, 2001.
[4] Huang, W., Lai, K.K., Nakamori, Y. and Wang, S. Forecasting foreign exchange rates with artificial neural networks: a review. International Journal of Information Technology & Decision Making, 3(1): 145-165, 2004.
[5] Yao, X. Evolving artificial neural networks. Proceedings of the IEEE, 87(9): 1423-1447, 1999.
[6] Moody, J. Economic forecasting: challenges and neural network solutions. In Proceedings of the International Symposium on Artificial Neural Networks, Hsinchu, Taiwan, 1995.
[7] Kohzadi, N., Boyd, M., Kermanshahi, B. and Kaastra, I. A comparison of artificial neural network and time-series models for forecasting commodity prices. Neurocomputing, 10: 169-181, 1996.
[8] Fieldsend, J.E. and Singh, S. Pareto evolutionary neural networks. IEEE Transactions on Neural Networks, 16(2): 338-353, 2005.
[9] Duda, R.O., Hart, P.E. and Stork, D.G. Pattern Classification. 2nd ed. Wiley, New York, 2000.
[10] Raudys, S. Statistical and Neural Classifiers: An Integrated Approach to Design. Springer, New York, 2001.
[11] Raudys, S. and Amari, S. Effect of initial values in simple perceptron. In Proc. 1998 IEEE World Congress on Computational Intelligence, IEEE Press, Vol. IJCNN'98: 1530-1535, 1998.
[12] Fama, E.F. Efficient capital markets: a review of theory and empirical work. Journal of Finance, 25: 383-417, 1970.
[13] Skurichina, M., Raudys, S. and Duin, R.P.W. K-nearest neighbors directed noise injection in multilayer perceptron training. IEEE Transactions on Neural Networks, 11: 504-511, 2000.
