Neural Networks for Market Indicators Analysis

Neural networks have been successfully used to tackle a wide range of challenging problems in science and engineering. The example in this article uses Keras to build a neural network that performs multi-output regression for stock and gold price forecasting. You’ll configure and train this neural network, and then evaluate the accuracy of the model using Mean Absolute Error (MAE), the simplest measure of forecast accuracy.
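As a quick refresher, MAE is simply the average of the absolute differences between predicted and actual values. The following minimal sketch (with made-up numbers, not output from the model in this article) shows the calculation with NumPy:
import numpy as np
y_pred = np.array([1.0, -1.0, 0.5])   # hypothetical predicted next-day changes, in percent
y_true = np.array([0.7, -1.3, 0.2])   # hypothetical actual next-day changes, in percent
mae = np.mean(np.abs(y_pred - y_true))
print(round(mae, 3))  # 0.3 for these made-up numbers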
Using Google Colab
Perhaps the simplest way to implement a neural network with Keras is to run a notebook in Google Colab, where you can organize your code into several code cells. You won’t need to install Keras or NumPy, which are also used in the example, since Colab includes them already. However, you still need to install the yfinance and quandl libraries to get stock and gold prices, respectively:
!pip install yfinance
!pip install quandl
After that, you can use these libraries to obtain data for training and evaluating the neural network.
Getting the Data
In the following code cell, you obtain historical stock prices for a certain ticker (Tesla, in this example) for the last five years (of course, you might choose another ticker):
import yfinance as yf
tkr = yf.Ticker('TSLA')
hist = tkr.history(period="5y")
import pandas_datareader.data as pdr
from datetime import date, timedelta
end = date.today()
start = end - timedelta(days=5*365+1)
# S&P 500 index data from Stooq for the same five-year window
index_data = pdr.get_data_stooq('^SPX', start, end)
# join the ticker and index data on the date index
df = hist.join(index_data, lsuffix = '_tkr', rsuffix = '_idx')
df = df[['Close_tkr','Volume_tkr','Close_idx','Volume_idx']]
As you might guess from the last line in the cell above, you’re going to use only the closing prices and trading volumes of the ticker and the index for further analysis.
Then, you get the gold prices for this same period. Before you can do that, you will need to create a Quandl account to get a free API token:
import quandl
gold_price = quandl.get("LBMA/GOLD",start_date=start, end_date=end, authtoken="your_quandl_token")
df = df.join(gold_price['USD (AM)'])
df = df.rename(columns={'USD (AM)': 'Gold'})
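Before moving on, it’s worth taking a quick, optional look at the joined DataFrame to confirm that the Gold column lines up with the stock data (gold fixes and stock quotes don’t always cover exactly the same trading days, so some rows may contain NaN values that get dropped later):
print(df.tail())
print(df.isna().sum())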
Now that you’ve gotten the data, you can generate the features and the output for the network to be trained.
Generating the Features from the Data
In the next cell, you preprocess the data before training the network, performing feature scaling. In practice, this means calculating the day-over-day log return of each indicator, that is, the logarithm of today’s value divided by the previous day’s value:
import numpy as np
df['priceRise_tkr'] = np.log(df['Close_tkr'] / df['Close_tkr'].shift(1))
df['volumeRise_tkr'] = np.log(df['Volume_tkr'] / df['Volume_tkr'].shift(1))
df['priceRise_idx'] = np.log(df['Close_idx'] / df['Close_idx'].shift(1))
df['volumeRise_idx'] = np.log(df['Volume_idx'] / df['Volume_idx'].shift(1))
df['priceRise_gold'] = np.log(df['Gold'] / df['Gold'].shift(1))
df = df.dropna()
Then, you generate the outputs by taking the next day’s log return of each price indicator and expressing it as a percentage:
df['tkrPred'] = df['priceRise_tkr'].shift(-1)
df['goldPred'] = df['priceRise_gold'].shift(-1)
df['tkrPred'] = df['tkrPred']*100
df['goldPred'] = df['goldPred']*100
df = df.dropna()
Note that multiplying by 100 scales the values in the tkrPred and goldPred columns to percentages rather than leaving them as small fractions between -1.0 and 1.0; for example, a log return of 0.015 becomes 1.5, roughly a 1.5% rise.
Now you need to convert the feature and target columns of the df DataFrame into NumPy arrays to be used for training and evaluating the model:
features = df[['priceRise_tkr','volumeRise_tkr','priceRise_idx','volumeRise_idx','priceRise_gold']].to_numpy()
features = np.around(features, decimals=2)
target = df[['tkrPred','goldPred']].to_numpy()
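Before configuring the network, you can optionally check that the array shapes match what the model expects; the exact number of rows depends on how many trading days fall into your five-year window:
print(features.shape)  # (number_of_samples, 5): five input features per day
print(target.shape)    # (number_of_samples, 2): two outputs, ticker and gold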
Configuring the Model
In this example, you’re going to create a simple Sequential model with a stack of Dense layers:
from keras.models import Sequential
from keras.layers import Dense
You define the model in a separate function:
def get_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(n_inputs, input_dim=n_inputs, activation='relu'))
    model.add(Dense(100, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(n_outputs))
    model.compile(loss='mae', optimizer='adam')
    return model
As you can see, the model has three layers: input, hidden, and output. You specify mae as the loss function and adam as the optimizer.
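If you want to inspect the network’s structure before training, you can optionally build an instance and print its summary (here 5 and 2 match the numbers of features and targets prepared earlier):
model = get_model(5, 2)
model.summary()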
Training and Evaluating the Model
Here, you train and evaluate the model using the repeated K-fold cross-validator, which repeats K-fold cross-validation several times with a different randomization of the splits in each repetition:
from sklearn.model_selection import RepeatedKFold
n_inputs, n_outputs = features.shape[1], target.shape[1]
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
for train_ix, test_ix in cv.split(features):
    X_train, X_test = features[train_ix], features[test_ix]
    y_train, y_test = target[train_ix], target[test_ix]
    model = get_model(n_inputs, n_outputs)
    model.fit(X_train, y_train, verbose=0, epochs=20)
    mae = model.evaluate(X_test, y_test, verbose=0)
    print('>%.3f' % mae)
Here is what the output might look like:
>1.540
>1.527
>1.551
>1.606
>1.481
>1.495
>1.731
>1.380
…
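To reduce these per-fold scores to a single accuracy figure, you could collect them in a list and average them. A minimal sketch, assuming you add results = [] before the loop above and results.append(mae) inside it:
print('Mean MAE: %.3f (std %.3f)' % (np.mean(results), np.std(results)))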