Neural networks are a category of machine learning models that have seen a resurgence since 2006. Deep learning is the recent area of machine learning that stacks many layers of neurons (e.g. 20, 50, or more) to form a "deep" neural network. With that depth, a deep neural network can accomplish sophisticated classification tasks that classical machine learning models would find difficult.
Keras is a Python package for deep learning that provides an easy-to-use layer of abstraction on top of Theano and TensorFlow.
Import Keras objects, along with NumPy and Matplotlib, which we will use below:
from keras.models import Sequential
from keras.layers import Dense
import keras.optimizers
import numpy
import matplotlib.pyplot as plt
Create a neural network architecture by layering neurons. Define the number of neurons in each layer and their activation functions:
model = Sequential()
model.add(Dense(4, activation='relu', input_dim=2))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='softmax'))
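As a quick sanity check on the architecture above, each Dense layer has (inputs × units) weights plus units biases; model.summary() should report the same totals:

```python
# (inputs, units) for each Dense layer in the architecture above
layers = [(2, 4), (4, 4), (4, 2)]
params = [n_in * n_out + n_out for n_in, n_out in layers]
print(params, sum(params))  # [12, 20, 10] 42
```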
Choose the optimizer, i.e. the update rule that the neural network will use to train:
optimizer = keras.optimizers.SGD(decay=0.001, momentum=0.99)
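To see what this update rule does, here is a minimal sketch of SGD with momentum and time-based learning-rate decay (the scheme Keras's SGD uses; the learning rate of 0.01 is Keras's default, assumed here since none is set above), run on a toy one-dimensional quadratic loss:

```python
# Toy loss L(w) = w**2, minimized at w = 0.
lr, momentum, decay = 0.01, 0.99, 0.001  # lr = 0.01 is the Keras default
w, velocity = 5.0, 0.0
for step in range(2000):
    grad = 2 * w                      # dL/dw
    lr_t = lr / (1 + decay * step)    # time-based decay, as in Keras's SGD
    velocity = momentum * velocity - lr_t * grad
    w += velocity
print(abs(w))  # ends up close to the minimum at 0
```

The high momentum (0.99) makes the iterate oscillate around the minimum before settling, which is why many steps are needed.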
Compile the model, i.e. create the low-level code that the CPU or GPU will actually use for its calculations during training and testing:
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
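The cross-entropy loss measures how far the predicted class probabilities are from the one-hot target. For a single example it can be computed by hand (a sketch of the formula, not Keras's implementation):

```python
import numpy

y_true = numpy.array([0.0, 1.0])   # one-hot target: class 1
y_pred = numpy.array([0.2, 0.8])   # softmax output
loss = -numpy.sum(y_true * numpy.log(y_pred))
print(loss)  # -log(0.8) ≈ 0.223
```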
The operation XOR is defined as: XOR(x, y) = 1 if x != y else 0
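As a quick check, the definition above gives the familiar truth table:

```python
def xor(x, y):
    return 1 if x != y else 0

for x in (0, 1):
    for y in (0, 1):
        print(x, y, xor(x, y))
# 0 0 0
# 0 1 1
# 1 0 1
# 1 1 0
```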
Synthesize training data for the XOR problem: random 2-D points, where a point's class is determined by whether its coordinates have the same sign (a continuous analogue of XOR).
X_train = numpy.random.randn(10000, 2)
print(X_train.shape)
print(X_train[:5])
Create one-hot target labels for the training data: column 0 is 1 when the two coordinates share a sign, column 1 when their signs differ.
y_train = numpy.array([
    [float(x[0]*x[1] > 0), float(x[0]*x[1] <= 0)]
    for x in X_train
])
print(y_train.shape)
y_train[:5]
Plot the training data:
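One way to plot it is a scatter colored by class. This sketch regenerates the data so it runs on its own; in the notebook, reuse X_train from above:

```python
import numpy
import matplotlib.pyplot as plt

X_train = numpy.random.randn(10000, 2)
same_sign = X_train[:, 0] * X_train[:, 1] > 0  # class "XOR = 0"

plt.scatter(X_train[same_sign, 0], X_train[same_sign, 1], s=2, label='XOR = 0')
plt.scatter(X_train[~same_sign, 0], X_train[~same_sign, 1], s=2, label='XOR = 1')
plt.legend()
plt.show()
```

The two classes form the four quadrants in a checkerboard pattern, which is why no single linear boundary can separate them.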
Finally, train the model!
results = model.fit(X_train, y_train, epochs=200, batch_size=100)
Plot the loss as a function of the training epoch:
plt.plot(results.history['loss'])
Create test data:
X_test = numpy.random.randn(5000, 2)
Use the trained neural network to make predictions from the test data:
y_pred = model.predict(X_test)
y_pred.shape
Let's see if it worked:
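One simple check is to compare the predicted class (the argmax of the softmax output) against the true sign rule. This sketch stands in a perfect predictor so it runs on its own; in the notebook, use the model's actual predictions instead:

```python
import numpy

X_test = numpy.random.randn(5000, 2)
opposite_sign = X_test[:, 0] * X_test[:, 1] <= 0   # true "XOR = 1" class

# Stand-in for model.predict(X_test): one-hot encoding of the true rule.
y_pred = numpy.stack([~opposite_sign, opposite_sign], axis=1).astype(float)

predicted_class = y_pred.argmax(axis=1)  # column 1 means "XOR = 1"
accuracy = numpy.mean(predicted_class == opposite_sign)
print(accuracy)  # 1.0 for the stand-in; a well-trained network should come close
```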