Evaluate and Diagnose Deep Learning Models in Keras

There are a lot of decisions to make when designing and configuring a deep learning model. Most of these decisions must be resolved empirically, through trial and error, by evaluating candidate configurations on real data.

This includes high-level decisions like the number, size, and type of layers in your network. It also includes lower-level decisions like the choice of loss function, activation functions, optimizer, batch size, and number of epochs.

As such, it is critically important to have a robust way to evaluate and diagnose the performance of your neural networks and deep learning models.

Data Splitting

Often we have large amounts of data, and the complexity of our models demands very long training times. As such, it is typical to use a simple separation of data into training and test datasets or training and validation datasets.

Keras provides $2$ convenient ways of evaluating your deep learning models with such a split:

  1. Use an automatic verification dataset.
  2. Use a manual verification dataset.

Use an Automatic Verification Dataset

Keras can separate a portion of your training data into a validation dataset and evaluate the performance of your model on that validation dataset each epoch. You can do this by setting the validation_split argument on the fit() function to a percentage of the size of your training dataset.

For example, a reasonable value might be $0.2$ or $0.33$ for $20\%$ or $33\%$ of your training data held back for validation.

The example below demonstrates the use of an automatic validation dataset on a small binary classification problem. For this we use the Pima Indians diabetes dataset.

In [5]:
import numpy as np 
# Fix random seed for reproducibility
np.random.seed(0)

url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv'
dataset = np.loadtxt(url, delimiter=",")
print(dataset.shape)
print(dataset[:5])
(768, 9)
[[6.000e+00 1.480e+02 7.200e+01 3.500e+01 0.000e+00 3.360e+01 6.270e-01
  5.000e+01 1.000e+00]
 [1.000e+00 8.500e+01 6.600e+01 2.900e+01 0.000e+00 2.660e+01 3.510e-01
  3.100e+01 0.000e+00]
 [8.000e+00 1.830e+02 6.400e+01 0.000e+00 0.000e+00 2.330e+01 6.720e-01
  3.200e+01 1.000e+00]
 [1.000e+00 8.900e+01 6.600e+01 2.300e+01 9.400e+01 2.810e+01 1.670e-01
  2.100e+01 0.000e+00]
 [0.000e+00 1.370e+02 4.000e+01 3.500e+01 1.680e+02 4.310e+01 2.288e+00
  3.300e+01 1.000e+00]]
In [6]:
# Split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
In [7]:
# Import Keras 
from keras.models import Sequential
from keras.layers import Dense

# Create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X, y, validation_split=0.33, epochs=150, batch_size=10, verbose=1)
Train on 514 samples, validate on 254 samples
Epoch 1/150
514/514 [==============================] - 0s 333us/step - loss: 4.1508 - accuracy: 0.6128 - val_loss: 2.6006 - val_accuracy: 0.5433
Epoch 2/150
514/514 [==============================] - 0s 93us/step - loss: 2.1185 - accuracy: 0.5992 - val_loss: 2.1559 - val_accuracy: 0.6417
Epoch 3/150
514/514 [==============================] - 0s 85us/step - loss: 1.9023 - accuracy: 0.6167 - val_loss: 1.8982 - val_accuracy: 0.6063
Epoch 4/150
514/514 [==============================] - 0s 83us/step - loss: 1.7776 - accuracy: 0.6498 - val_loss: 1.7068 - val_accuracy: 0.5630
Epoch 5/150
514/514 [==============================] - 0s 88us/step - loss: 1.7808 - accuracy: 0.6440 - val_loss: 1.5766 - val_accuracy: 0.6102
Epoch 6/150
514/514 [==============================] - 0s 84us/step - loss: 1.4177 - accuracy: 0.6595 - val_loss: 1.5715 - val_accuracy: 0.5748
Epoch 7/150
514/514 [==============================] - 0s 89us/step - loss: 1.3631 - accuracy: 0.6712 - val_loss: 1.4095 - val_accuracy: 0.5866
Epoch 8/150
514/514 [==============================] - 0s 82us/step - loss: 1.3713 - accuracy: 0.6459 - val_loss: 1.4020 - val_accuracy: 0.6102
Epoch 9/150
514/514 [==============================] - 0s 82us/step - loss: 1.3146 - accuracy: 0.6537 - val_loss: 1.3539 - val_accuracy: 0.6063
Epoch 10/150
514/514 [==============================] - 0s 84us/step - loss: 1.1882 - accuracy: 0.6401 - val_loss: 1.3399 - val_accuracy: 0.6417
Epoch 11/150
514/514 [==============================] - 0s 81us/step - loss: 1.2072 - accuracy: 0.6673 - val_loss: 1.2329 - val_accuracy: 0.6102
Epoch 12/150
514/514 [==============================] - 0s 86us/step - loss: 1.2856 - accuracy: 0.6634 - val_loss: 1.1624 - val_accuracy: 0.5984
Epoch 13/150
514/514 [==============================] - 0s 81us/step - loss: 1.0936 - accuracy: 0.6537 - val_loss: 1.0901 - val_accuracy: 0.6299
Epoch 14/150
514/514 [==============================] - 0s 92us/step - loss: 1.0784 - accuracy: 0.6537 - val_loss: 1.0506 - val_accuracy: 0.6575
Epoch 15/150
514/514 [==============================] - 0s 85us/step - loss: 1.0426 - accuracy: 0.6732 - val_loss: 1.0044 - val_accuracy: 0.6142
Epoch 16/150
514/514 [==============================] - 0s 82us/step - loss: 0.9867 - accuracy: 0.6634 - val_loss: 1.0024 - val_accuracy: 0.6220
Epoch 17/150
514/514 [==============================] - 0s 91us/step - loss: 0.9515 - accuracy: 0.6712 - val_loss: 1.0040 - val_accuracy: 0.6339
Epoch 18/150
514/514 [==============================] - 0s 88us/step - loss: 0.9629 - accuracy: 0.6498 - val_loss: 0.9234 - val_accuracy: 0.6102
Epoch 19/150
514/514 [==============================] - 0s 86us/step - loss: 0.9504 - accuracy: 0.6654 - val_loss: 0.8998 - val_accuracy: 0.6496
Epoch 20/150
514/514 [==============================] - 0s 83us/step - loss: 0.9251 - accuracy: 0.6498 - val_loss: 0.8834 - val_accuracy: 0.6260
Epoch 21/150
514/514 [==============================] - 0s 93us/step - loss: 0.8700 - accuracy: 0.6829 - val_loss: 0.8633 - val_accuracy: 0.6299
Epoch 22/150
514/514 [==============================] - 0s 86us/step - loss: 0.8566 - accuracy: 0.6829 - val_loss: 0.9140 - val_accuracy: 0.6457
Epoch 23/150
514/514 [==============================] - 0s 86us/step - loss: 0.8597 - accuracy: 0.6673 - val_loss: 0.8156 - val_accuracy: 0.6378
Epoch 24/150
514/514 [==============================] - 0s 83us/step - loss: 0.8525 - accuracy: 0.6673 - val_loss: 0.9459 - val_accuracy: 0.6654
Epoch 25/150
514/514 [==============================] - 0s 86us/step - loss: 0.8043 - accuracy: 0.6829 - val_loss: 0.8270 - val_accuracy: 0.6339
Epoch 26/150
514/514 [==============================] - 0s 86us/step - loss: 0.8146 - accuracy: 0.6868 - val_loss: 0.7993 - val_accuracy: 0.6614
Epoch 27/150
514/514 [==============================] - 0s 83us/step - loss: 0.7929 - accuracy: 0.6984 - val_loss: 0.7887 - val_accuracy: 0.6772
Epoch 28/150
514/514 [==============================] - 0s 88us/step - loss: 0.7909 - accuracy: 0.6848 - val_loss: 0.7707 - val_accuracy: 0.6339
Epoch 29/150
514/514 [==============================] - 0s 84us/step - loss: 0.7591 - accuracy: 0.6868 - val_loss: 0.7539 - val_accuracy: 0.6535
Epoch 30/150
514/514 [==============================] - 0s 83us/step - loss: 0.7664 - accuracy: 0.6868 - val_loss: 0.8193 - val_accuracy: 0.6260
Epoch 31/150
514/514 [==============================] - 0s 85us/step - loss: 0.7126 - accuracy: 0.6693 - val_loss: 0.7254 - val_accuracy: 0.6693
Epoch 32/150
514/514 [==============================] - 0s 88us/step - loss: 0.7290 - accuracy: 0.6790 - val_loss: 0.7214 - val_accuracy: 0.6496
Epoch 33/150
514/514 [==============================] - 0s 84us/step - loss: 0.7235 - accuracy: 0.6751 - val_loss: 0.7161 - val_accuracy: 0.6811
Epoch 34/150
514/514 [==============================] - 0s 84us/step - loss: 0.7158 - accuracy: 0.7043 - val_loss: 0.6992 - val_accuracy: 0.6535
Epoch 35/150
514/514 [==============================] - 0s 87us/step - loss: 0.6943 - accuracy: 0.6790 - val_loss: 0.7358 - val_accuracy: 0.6693
Epoch 36/150
514/514 [==============================] - 0s 84us/step - loss: 0.7023 - accuracy: 0.6848 - val_loss: 0.7362 - val_accuracy: 0.6772
Epoch 37/150
514/514 [==============================] - 0s 86us/step - loss: 0.7368 - accuracy: 0.6868 - val_loss: 0.6658 - val_accuracy: 0.6693
Epoch 38/150
514/514 [==============================] - 0s 83us/step - loss: 0.6586 - accuracy: 0.6946 - val_loss: 0.7826 - val_accuracy: 0.6339
Epoch 39/150
514/514 [==============================] - 0s 85us/step - loss: 0.7327 - accuracy: 0.6946 - val_loss: 0.6741 - val_accuracy: 0.6850
Epoch 40/150
514/514 [==============================] - 0s 86us/step - loss: 0.6969 - accuracy: 0.6770 - val_loss: 0.6611 - val_accuracy: 0.6772
Epoch 41/150
514/514 [==============================] - 0s 83us/step - loss: 0.7116 - accuracy: 0.7004 - val_loss: 0.8117 - val_accuracy: 0.6220
Epoch 42/150
514/514 [==============================] - 0s 91us/step - loss: 0.7372 - accuracy: 0.6887 - val_loss: 0.6463 - val_accuracy: 0.6969
Epoch 43/150
514/514 [==============================] - 0s 89us/step - loss: 0.7330 - accuracy: 0.6615 - val_loss: 0.8222 - val_accuracy: 0.5984
Epoch 44/150
514/514 [==============================] - 0s 90us/step - loss: 0.6461 - accuracy: 0.6946 - val_loss: 0.6858 - val_accuracy: 0.6811
Epoch 45/150
514/514 [==============================] - 0s 84us/step - loss: 0.6653 - accuracy: 0.6887 - val_loss: 0.7958 - val_accuracy: 0.6732
Epoch 46/150
514/514 [==============================] - 0s 85us/step - loss: 0.6639 - accuracy: 0.7023 - val_loss: 0.6520 - val_accuracy: 0.7008
Epoch 47/150
514/514 [==============================] - 0s 84us/step - loss: 0.6492 - accuracy: 0.6848 - val_loss: 0.6966 - val_accuracy: 0.6811
Epoch 48/150
514/514 [==============================] - 0s 84us/step - loss: 0.6342 - accuracy: 0.6984 - val_loss: 0.6233 - val_accuracy: 0.6890
Epoch 49/150
514/514 [==============================] - 0s 84us/step - loss: 0.7111 - accuracy: 0.6693 - val_loss: 0.8125 - val_accuracy: 0.5984
Epoch 50/150
514/514 [==============================] - 0s 84us/step - loss: 0.6432 - accuracy: 0.6946 - val_loss: 0.6203 - val_accuracy: 0.6929
Epoch 51/150
514/514 [==============================] - 0s 88us/step - loss: 0.6236 - accuracy: 0.7218 - val_loss: 0.6214 - val_accuracy: 0.6929
Epoch 52/150
514/514 [==============================] - 0s 83us/step - loss: 0.6036 - accuracy: 0.7004 - val_loss: 0.6205 - val_accuracy: 0.6929
Epoch 53/150
514/514 [==============================] - 0s 80us/step - loss: 0.6366 - accuracy: 0.7043 - val_loss: 0.6869 - val_accuracy: 0.6850
Epoch 54/150
514/514 [==============================] - 0s 88us/step - loss: 0.6126 - accuracy: 0.7023 - val_loss: 0.6198 - val_accuracy: 0.7008
Epoch 55/150
514/514 [==============================] - 0s 82us/step - loss: 0.6344 - accuracy: 0.6984 - val_loss: 0.5897 - val_accuracy: 0.7087
Epoch 56/150
514/514 [==============================] - 0s 87us/step - loss: 0.5934 - accuracy: 0.7179 - val_loss: 0.5846 - val_accuracy: 0.7008
Epoch 57/150
514/514 [==============================] - 0s 86us/step - loss: 0.6051 - accuracy: 0.7121 - val_loss: 0.6248 - val_accuracy: 0.6929
Epoch 58/150
514/514 [==============================] - 0s 89us/step - loss: 0.6406 - accuracy: 0.6965 - val_loss: 0.5737 - val_accuracy: 0.7008
Epoch 59/150
514/514 [==============================] - 0s 84us/step - loss: 0.5795 - accuracy: 0.7276 - val_loss: 0.7713 - val_accuracy: 0.5906
Epoch 60/150
514/514 [==============================] - 0s 85us/step - loss: 0.5939 - accuracy: 0.7062 - val_loss: 0.6160 - val_accuracy: 0.6969
Epoch 61/150
514/514 [==============================] - 0s 82us/step - loss: 0.5831 - accuracy: 0.7043 - val_loss: 0.7715 - val_accuracy: 0.5748
Epoch 62/150
514/514 [==============================] - 0s 85us/step - loss: 0.6268 - accuracy: 0.7023 - val_loss: 0.6012 - val_accuracy: 0.7008
Epoch 63/150
514/514 [==============================] - 0s 84us/step - loss: 0.5844 - accuracy: 0.7237 - val_loss: 0.5789 - val_accuracy: 0.6969
Epoch 64/150
514/514 [==============================] - 0s 80us/step - loss: 0.6290 - accuracy: 0.7101 - val_loss: 0.5687 - val_accuracy: 0.7126
Epoch 65/150
514/514 [==============================] - 0s 82us/step - loss: 0.6205 - accuracy: 0.6868 - val_loss: 0.6574 - val_accuracy: 0.6811
Epoch 66/150
514/514 [==============================] - 0s 83us/step - loss: 0.5557 - accuracy: 0.7160 - val_loss: 0.5744 - val_accuracy: 0.6850
Epoch 67/150
514/514 [==============================] - 0s 85us/step - loss: 0.5931 - accuracy: 0.7101 - val_loss: 0.6005 - val_accuracy: 0.7008
Epoch 68/150
514/514 [==============================] - 0s 84us/step - loss: 0.5802 - accuracy: 0.7198 - val_loss: 0.5654 - val_accuracy: 0.7126
Epoch 69/150
514/514 [==============================] - 0s 82us/step - loss: 0.6703 - accuracy: 0.6907 - val_loss: 0.6030 - val_accuracy: 0.6811
Epoch 70/150
514/514 [==============================] - 0s 82us/step - loss: 0.5621 - accuracy: 0.7257 - val_loss: 0.6601 - val_accuracy: 0.6457
Epoch 71/150
514/514 [==============================] - 0s 86us/step - loss: 0.5680 - accuracy: 0.7140 - val_loss: 0.5813 - val_accuracy: 0.7244
Epoch 72/150
514/514 [==============================] - 0s 83us/step - loss: 0.6175 - accuracy: 0.7101 - val_loss: 0.6132 - val_accuracy: 0.6850
Epoch 73/150
514/514 [==============================] - 0s 85us/step - loss: 0.5817 - accuracy: 0.7160 - val_loss: 0.5794 - val_accuracy: 0.7126
Epoch 74/150
514/514 [==============================] - 0s 83us/step - loss: 0.5594 - accuracy: 0.7160 - val_loss: 0.6139 - val_accuracy: 0.7008
Epoch 75/150
514/514 [==============================] - 0s 82us/step - loss: 0.5573 - accuracy: 0.7140 - val_loss: 0.5534 - val_accuracy: 0.7362
Epoch 76/150
514/514 [==============================] - 0s 84us/step - loss: 0.5644 - accuracy: 0.7198 - val_loss: 0.6916 - val_accuracy: 0.6811
Epoch 77/150
514/514 [==============================] - 0s 83us/step - loss: 0.5733 - accuracy: 0.7082 - val_loss: 0.5678 - val_accuracy: 0.7323
Epoch 78/150
514/514 [==============================] - 0s 84us/step - loss: 0.5767 - accuracy: 0.7315 - val_loss: 0.5762 - val_accuracy: 0.7283
Epoch 79/150
514/514 [==============================] - 0s 84us/step - loss: 0.5586 - accuracy: 0.7198 - val_loss: 0.5700 - val_accuracy: 0.6890
Epoch 80/150
514/514 [==============================] - 0s 81us/step - loss: 0.5840 - accuracy: 0.7218 - val_loss: 0.5669 - val_accuracy: 0.7126
Epoch 81/150
514/514 [==============================] - 0s 85us/step - loss: 0.5397 - accuracy: 0.7218 - val_loss: 0.6118 - val_accuracy: 0.6969
Epoch 82/150
514/514 [==============================] - 0s 86us/step - loss: 0.5670 - accuracy: 0.7257 - val_loss: 0.5868 - val_accuracy: 0.7323
Epoch 83/150
514/514 [==============================] - 0s 84us/step - loss: 0.5474 - accuracy: 0.7198 - val_loss: 0.5438 - val_accuracy: 0.7441
Epoch 84/150
514/514 [==============================] - 0s 83us/step - loss: 0.5731 - accuracy: 0.7237 - val_loss: 0.5508 - val_accuracy: 0.7323
Epoch 85/150
514/514 [==============================] - 0s 84us/step - loss: 0.5402 - accuracy: 0.7257 - val_loss: 0.6282 - val_accuracy: 0.7087
Epoch 86/150
514/514 [==============================] - 0s 84us/step - loss: 0.5918 - accuracy: 0.7101 - val_loss: 0.5443 - val_accuracy: 0.7087
Epoch 87/150
514/514 [==============================] - 0s 84us/step - loss: 0.5502 - accuracy: 0.7490 - val_loss: 0.6216 - val_accuracy: 0.6654
Epoch 88/150
514/514 [==============================] - 0s 85us/step - loss: 0.6272 - accuracy: 0.7062 - val_loss: 0.7011 - val_accuracy: 0.6024
Epoch 89/150
514/514 [==============================] - 0s 80us/step - loss: 0.5502 - accuracy: 0.7276 - val_loss: 0.5374 - val_accuracy: 0.7402
Epoch 90/150
514/514 [==============================] - 0s 85us/step - loss: 0.5437 - accuracy: 0.7335 - val_loss: 0.5701 - val_accuracy: 0.7323
Epoch 91/150
514/514 [==============================] - 0s 81us/step - loss: 0.5389 - accuracy: 0.7393 - val_loss: 0.5842 - val_accuracy: 0.6929
Epoch 92/150
514/514 [==============================] - 0s 83us/step - loss: 0.5659 - accuracy: 0.7218 - val_loss: 0.5635 - val_accuracy: 0.7283
Epoch 93/150
514/514 [==============================] - 0s 86us/step - loss: 0.5332 - accuracy: 0.7354 - val_loss: 0.5385 - val_accuracy: 0.7520
Epoch 94/150
514/514 [==============================] - 0s 82us/step - loss: 0.5506 - accuracy: 0.7296 - val_loss: 0.5417 - val_accuracy: 0.7323
Epoch 95/150
514/514 [==============================] - 0s 83us/step - loss: 0.5730 - accuracy: 0.7140 - val_loss: 0.5519 - val_accuracy: 0.7402
Epoch 96/150
514/514 [==============================] - 0s 82us/step - loss: 0.5461 - accuracy: 0.7257 - val_loss: 0.5511 - val_accuracy: 0.7402
Epoch 97/150
514/514 [==============================] - 0s 83us/step - loss: 0.5805 - accuracy: 0.7082 - val_loss: 0.5275 - val_accuracy: 0.7244
Epoch 98/150
514/514 [==============================] - 0s 84us/step - loss: 0.5184 - accuracy: 0.7393 - val_loss: 0.5390 - val_accuracy: 0.7362
Epoch 99/150
514/514 [==============================] - 0s 82us/step - loss: 0.5525 - accuracy: 0.7335 - val_loss: 0.5876 - val_accuracy: 0.6969
Epoch 100/150
514/514 [==============================] - 0s 83us/step - loss: 0.5336 - accuracy: 0.7432 - val_loss: 0.6461 - val_accuracy: 0.6850
Epoch 101/150
514/514 [==============================] - 0s 83us/step - loss: 0.5531 - accuracy: 0.7121 - val_loss: 0.5414 - val_accuracy: 0.7362
Epoch 102/150
514/514 [==============================] - 0s 90us/step - loss: 0.6106 - accuracy: 0.7140 - val_loss: 0.5584 - val_accuracy: 0.7441
Epoch 103/150
514/514 [==============================] - 0s 84us/step - loss: 0.5684 - accuracy: 0.6984 - val_loss: 0.6141 - val_accuracy: 0.6929
Epoch 104/150
514/514 [==============================] - 0s 82us/step - loss: 0.6881 - accuracy: 0.6790 - val_loss: 0.5443 - val_accuracy: 0.7126
Epoch 105/150
514/514 [==============================] - 0s 86us/step - loss: 0.5208 - accuracy: 0.7393 - val_loss: 0.5308 - val_accuracy: 0.7441
Epoch 106/150
514/514 [==============================] - 0s 83us/step - loss: 0.5177 - accuracy: 0.7490 - val_loss: 0.6264 - val_accuracy: 0.7047
Epoch 107/150
514/514 [==============================] - 0s 82us/step - loss: 0.5556 - accuracy: 0.7257 - val_loss: 0.5516 - val_accuracy: 0.7402
Epoch 108/150
514/514 [==============================] - 0s 84us/step - loss: 0.5433 - accuracy: 0.7257 - val_loss: 0.5441 - val_accuracy: 0.7205
Epoch 109/150
514/514 [==============================] - 0s 85us/step - loss: 0.5642 - accuracy: 0.7179 - val_loss: 0.5978 - val_accuracy: 0.7008
Epoch 110/150
514/514 [==============================] - 0s 82us/step - loss: 0.5626 - accuracy: 0.7140 - val_loss: 0.7750 - val_accuracy: 0.5591
Epoch 111/150
514/514 [==============================] - 0s 84us/step - loss: 0.6368 - accuracy: 0.7062 - val_loss: 0.5957 - val_accuracy: 0.6614
Epoch 112/150
514/514 [==============================] - 0s 82us/step - loss: 0.5483 - accuracy: 0.7218 - val_loss: 0.5270 - val_accuracy: 0.7559
Epoch 113/150
514/514 [==============================] - 0s 82us/step - loss: 0.6028 - accuracy: 0.7218 - val_loss: 0.5378 - val_accuracy: 0.7480
Epoch 114/150
514/514 [==============================] - 0s 85us/step - loss: 0.5195 - accuracy: 0.7374 - val_loss: 0.5336 - val_accuracy: 0.7283
Epoch 115/150
514/514 [==============================] - 0s 85us/step - loss: 0.5452 - accuracy: 0.7121 - val_loss: 0.5497 - val_accuracy: 0.7323
Epoch 116/150
514/514 [==============================] - 0s 84us/step - loss: 0.5649 - accuracy: 0.7257 - val_loss: 0.5611 - val_accuracy: 0.7126
Epoch 117/150
514/514 [==============================] - 0s 85us/step - loss: 0.5497 - accuracy: 0.7237 - val_loss: 0.5490 - val_accuracy: 0.7441
Epoch 118/150
514/514 [==============================] - 0s 83us/step - loss: 0.5235 - accuracy: 0.7374 - val_loss: 0.5375 - val_accuracy: 0.7244
Epoch 119/150
514/514 [==============================] - 0s 84us/step - loss: 0.5460 - accuracy: 0.7218 - val_loss: 0.5270 - val_accuracy: 0.7323
Epoch 120/150
514/514 [==============================] - 0s 87us/step - loss: 0.5193 - accuracy: 0.7529 - val_loss: 0.5472 - val_accuracy: 0.7323
Epoch 121/150
514/514 [==============================] - 0s 82us/step - loss: 0.5364 - accuracy: 0.7451 - val_loss: 0.5195 - val_accuracy: 0.7756
Epoch 122/150
514/514 [==============================] - 0s 86us/step - loss: 0.5674 - accuracy: 0.7354 - val_loss: 0.5306 - val_accuracy: 0.7559
Epoch 123/150
514/514 [==============================] - 0s 84us/step - loss: 0.5474 - accuracy: 0.7276 - val_loss: 0.5359 - val_accuracy: 0.7362
Epoch 124/150
514/514 [==============================] - 0s 84us/step - loss: 0.5243 - accuracy: 0.7315 - val_loss: 0.5190 - val_accuracy: 0.7441
Epoch 125/150
514/514 [==============================] - 0s 84us/step - loss: 0.5543 - accuracy: 0.7257 - val_loss: 0.6712 - val_accuracy: 0.6929
Epoch 126/150
514/514 [==============================] - 0s 84us/step - loss: 0.5482 - accuracy: 0.7354 - val_loss: 0.5161 - val_accuracy: 0.7598
Epoch 127/150
514/514 [==============================] - 0s 83us/step - loss: 0.5276 - accuracy: 0.7490 - val_loss: 0.5409 - val_accuracy: 0.7205
Epoch 128/150
514/514 [==============================] - 0s 88us/step - loss: 0.5642 - accuracy: 0.7121 - val_loss: 0.5744 - val_accuracy: 0.7047
Epoch 129/150
514/514 [==============================] - 0s 84us/step - loss: 0.5392 - accuracy: 0.7276 - val_loss: 0.5381 - val_accuracy: 0.7598
Epoch 130/150
514/514 [==============================] - 0s 83us/step - loss: 0.5256 - accuracy: 0.7374 - val_loss: 0.5607 - val_accuracy: 0.7480
Epoch 131/150
514/514 [==============================] - 0s 87us/step - loss: 0.4965 - accuracy: 0.7335 - val_loss: 0.5251 - val_accuracy: 0.7480
Epoch 132/150
514/514 [==============================] - 0s 84us/step - loss: 0.5252 - accuracy: 0.7432 - val_loss: 0.5435 - val_accuracy: 0.7520
Epoch 133/150
514/514 [==============================] - 0s 81us/step - loss: 0.5643 - accuracy: 0.7218 - val_loss: 0.5374 - val_accuracy: 0.7598
Epoch 134/150
514/514 [==============================] - 0s 87us/step - loss: 0.5850 - accuracy: 0.7179 - val_loss: 0.6382 - val_accuracy: 0.6339
Epoch 135/150
514/514 [==============================] - 0s 87us/step - loss: 0.5253 - accuracy: 0.7354 - val_loss: 0.5618 - val_accuracy: 0.7244
Epoch 136/150
514/514 [==============================] - 0s 84us/step - loss: 0.5260 - accuracy: 0.7412 - val_loss: 0.5629 - val_accuracy: 0.7323
Epoch 137/150
514/514 [==============================] - 0s 82us/step - loss: 0.5230 - accuracy: 0.7257 - val_loss: 0.5057 - val_accuracy: 0.7638
Epoch 138/150
514/514 [==============================] - 0s 85us/step - loss: 0.5043 - accuracy: 0.7529 - val_loss: 0.5233 - val_accuracy: 0.7480
Epoch 139/150
514/514 [==============================] - 0s 83us/step - loss: 0.5286 - accuracy: 0.7296 - val_loss: 0.5392 - val_accuracy: 0.7717
Epoch 140/150
514/514 [==============================] - 0s 86us/step - loss: 0.5440 - accuracy: 0.7218 - val_loss: 0.6516 - val_accuracy: 0.6614
Epoch 141/150
514/514 [==============================] - 0s 86us/step - loss: 0.5220 - accuracy: 0.7354 - val_loss: 0.4979 - val_accuracy: 0.7677
Epoch 142/150
514/514 [==============================] - 0s 83us/step - loss: 0.5256 - accuracy: 0.7276 - val_loss: 0.5258 - val_accuracy: 0.7677
Epoch 143/150
514/514 [==============================] - 0s 86us/step - loss: 0.5141 - accuracy: 0.7412 - val_loss: 0.5783 - val_accuracy: 0.7244
Epoch 144/150
514/514 [==============================] - 0s 84us/step - loss: 0.4995 - accuracy: 0.7490 - val_loss: 0.5772 - val_accuracy: 0.7165
Epoch 145/150
514/514 [==============================] - 0s 88us/step - loss: 0.5057 - accuracy: 0.7471 - val_loss: 0.5264 - val_accuracy: 0.7598
Epoch 146/150
514/514 [==============================] - 0s 82us/step - loss: 0.5004 - accuracy: 0.7490 - val_loss: 0.4992 - val_accuracy: 0.7717
Epoch 147/150
514/514 [==============================] - 0s 86us/step - loss: 0.5464 - accuracy: 0.7160 - val_loss: 0.5475 - val_accuracy: 0.7165
Epoch 148/150
514/514 [==============================] - 0s 85us/step - loss: 0.5461 - accuracy: 0.7296 - val_loss: 0.5259 - val_accuracy: 0.7559
Epoch 149/150
514/514 [==============================] - 0s 85us/step - loss: 0.5965 - accuracy: 0.7393 - val_loss: 0.5159 - val_accuracy: 0.7520
Epoch 150/150
514/514 [==============================] - 0s 87us/step - loss: 0.5099 - accuracy: 0.7393 - val_loss: 0.5617 - val_accuracy: 0.7441
Out[7]:
<keras.callbacks.callbacks.History at 0x7f2276c38f28>

You can see that the verbose output on each epoch shows the loss and accuracy on both the training dataset and the validation dataset.
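
The fit() function also returns a History object that records these values per epoch. Below is a minimal sketch of inspecting it, assuming you capture the return value of a fit() call like the one above (exact key names can vary slightly between Keras versions):

In [ ]:
# Sketch: capture the History object returned by fit(), re-using the model,
# X and y defined above (key names may differ slightly across Keras versions).
history = model.fit(X, y, validation_split=0.33, epochs=150, batch_size=10, verbose=0)

print(history.history.keys())
# e.g. dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])

# Validation metrics carry the 'val_' prefix; index -1 is the final epoch.
print(history.history['val_accuracy'][-1])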

Use a Manual Verification Dataset

Keras also allows you to manually specify the dataset to use for validation during training.

In this example we use the train_test_split() function from the scikit-learn Python library to separate our data into a training and test dataset. We use $67\%$ of the data for training and the remaining $33\%$ for validation.

The validation dataset can be specified to the fit() function in Keras by the validation_data argument. It takes a tuple of the input and output datasets.

In [9]:
from sklearn.model_selection import train_test_split

# Split into 67% for train and 33% for test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=150, batch_size=10, verbose=1)
Train on 514 samples, validate on 254 samples
Epoch 1/150
514/514 [==============================] - 0s 247us/step - loss: 0.5544 - accuracy: 0.7296 - val_loss: 0.4789 - val_accuracy: 0.7835
Epoch 2/150
514/514 [==============================] - 0s 93us/step - loss: 0.5184 - accuracy: 0.7568 - val_loss: 0.4896 - val_accuracy: 0.7559
Epoch 3/150
514/514 [==============================] - 0s 90us/step - loss: 0.5481 - accuracy: 0.7179 - val_loss: 0.6088 - val_accuracy: 0.6929
Epoch 4/150
514/514 [==============================] - 0s 88us/step - loss: 0.5338 - accuracy: 0.7510 - val_loss: 0.5642 - val_accuracy: 0.7205
Epoch 5/150
514/514 [==============================] - 0s 87us/step - loss: 0.5301 - accuracy: 0.7296 - val_loss: 0.5021 - val_accuracy: 0.7559
Epoch 6/150
514/514 [==============================] - 0s 85us/step - loss: 0.5175 - accuracy: 0.7354 - val_loss: 0.5106 - val_accuracy: 0.7677
Epoch 7/150
514/514 [==============================] - 0s 84us/step - loss: 0.5660 - accuracy: 0.7082 - val_loss: 0.5050 - val_accuracy: 0.7559
Epoch 8/150
514/514 [==============================] - 0s 85us/step - loss: 0.5203 - accuracy: 0.7529 - val_loss: 0.6330 - val_accuracy: 0.6693
Epoch 9/150
514/514 [==============================] - 0s 87us/step - loss: 0.5489 - accuracy: 0.7296 - val_loss: 0.5466 - val_accuracy: 0.7362
Epoch 10/150
514/514 [==============================] - 0s 83us/step - loss: 0.5114 - accuracy: 0.7510 - val_loss: 0.5824 - val_accuracy: 0.7362
Epoch 11/150
514/514 [==============================] - 0s 86us/step - loss: 0.5078 - accuracy: 0.7354 - val_loss: 0.4968 - val_accuracy: 0.7835
Epoch 12/150
514/514 [==============================] - 0s 84us/step - loss: 0.5016 - accuracy: 0.7393 - val_loss: 0.5304 - val_accuracy: 0.7402
Epoch 13/150
514/514 [==============================] - 0s 85us/step - loss: 0.5111 - accuracy: 0.7510 - val_loss: 0.5764 - val_accuracy: 0.7362
Epoch 14/150
514/514 [==============================] - 0s 92us/step - loss: 0.5078 - accuracy: 0.7510 - val_loss: 0.5224 - val_accuracy: 0.7362
Epoch 15/150
514/514 [==============================] - 0s 88us/step - loss: 0.5617 - accuracy: 0.7198 - val_loss: 0.6865 - val_accuracy: 0.6890
Epoch 16/150
514/514 [==============================] - 0s 84us/step - loss: 0.6876 - accuracy: 0.7062 - val_loss: 0.5321 - val_accuracy: 0.7480
Epoch 17/150
514/514 [==============================] - 0s 91us/step - loss: 0.5570 - accuracy: 0.7257 - val_loss: 0.5117 - val_accuracy: 0.7480
Epoch 18/150
514/514 [==============================] - 0s 87us/step - loss: 0.5159 - accuracy: 0.7335 - val_loss: 0.4981 - val_accuracy: 0.7717
Epoch 19/150
514/514 [==============================] - 0s 88us/step - loss: 0.5038 - accuracy: 0.7568 - val_loss: 0.5368 - val_accuracy: 0.7283
Epoch 20/150
514/514 [==============================] - 0s 89us/step - loss: 0.4972 - accuracy: 0.7588 - val_loss: 0.5284 - val_accuracy: 0.7323
Epoch 21/150
514/514 [==============================] - 0s 82us/step - loss: 0.4930 - accuracy: 0.7568 - val_loss: 0.5608 - val_accuracy: 0.7087
Epoch 22/150
514/514 [==============================] - 0s 91us/step - loss: 0.5183 - accuracy: 0.7568 - val_loss: 0.6297 - val_accuracy: 0.7008
Epoch 23/150
514/514 [==============================] - 0s 90us/step - loss: 0.5121 - accuracy: 0.7412 - val_loss: 0.5043 - val_accuracy: 0.7402
Epoch 24/150
514/514 [==============================] - 0s 88us/step - loss: 0.4926 - accuracy: 0.7704 - val_loss: 0.5197 - val_accuracy: 0.7520
Epoch 25/150
514/514 [==============================] - 0s 86us/step - loss: 0.5085 - accuracy: 0.7529 - val_loss: 0.5469 - val_accuracy: 0.7244
Epoch 26/150
514/514 [==============================] - 0s 91us/step - loss: 0.5215 - accuracy: 0.7393 - val_loss: 0.5886 - val_accuracy: 0.7087
Epoch 27/150
514/514 [==============================] - 0s 87us/step - loss: 0.5231 - accuracy: 0.7432 - val_loss: 0.5483 - val_accuracy: 0.7480
Epoch 28/150
514/514 [==============================] - 0s 86us/step - loss: 0.4869 - accuracy: 0.7588 - val_loss: 0.5269 - val_accuracy: 0.7677
Epoch 29/150
514/514 [==============================] - 0s 84us/step - loss: 0.5199 - accuracy: 0.7315 - val_loss: 0.6160 - val_accuracy: 0.7047
Epoch 30/150
514/514 [==============================] - 0s 86us/step - loss: 0.5015 - accuracy: 0.7354 - val_loss: 0.5029 - val_accuracy: 0.7874
Epoch 31/150
514/514 [==============================] - 0s 87us/step - loss: 0.5271 - accuracy: 0.7296 - val_loss: 0.5069 - val_accuracy: 0.7598
Epoch 32/150
514/514 [==============================] - 0s 127us/step - loss: 0.5139 - accuracy: 0.7432 - val_loss: 0.5241 - val_accuracy: 0.7795
Epoch 33/150
514/514 [==============================] - 0s 85us/step - loss: 0.5031 - accuracy: 0.7607 - val_loss: 0.5114 - val_accuracy: 0.7520
Epoch 34/150
514/514 [==============================] - 0s 86us/step - loss: 0.4858 - accuracy: 0.7432 - val_loss: 0.5486 - val_accuracy: 0.7559
Epoch 35/150
514/514 [==============================] - 0s 91us/step - loss: 0.5021 - accuracy: 0.7412 - val_loss: 0.5298 - val_accuracy: 0.7283
Epoch 36/150
514/514 [==============================] - 0s 87us/step - loss: 0.5049 - accuracy: 0.7665 - val_loss: 0.5562 - val_accuracy: 0.7402
Epoch 37/150
514/514 [==============================] - 0s 86us/step - loss: 0.5327 - accuracy: 0.7510 - val_loss: 0.5240 - val_accuracy: 0.7362
Epoch 38/150
514/514 [==============================] - 0s 87us/step - loss: 0.5434 - accuracy: 0.7257 - val_loss: 0.5416 - val_accuracy: 0.7717
Epoch 39/150
514/514 [==============================] - 0s 84us/step - loss: 0.5003 - accuracy: 0.7724 - val_loss: 0.5715 - val_accuracy: 0.7362
Epoch 40/150
514/514 [==============================] - 0s 87us/step - loss: 0.4993 - accuracy: 0.7568 - val_loss: 0.5262 - val_accuracy: 0.7559
Epoch 41/150
514/514 [==============================] - 0s 84us/step - loss: 0.5215 - accuracy: 0.7510 - val_loss: 0.5037 - val_accuracy: 0.7717
Epoch 42/150
514/514 [==============================] - 0s 85us/step - loss: 0.5195 - accuracy: 0.7412 - val_loss: 0.5308 - val_accuracy: 0.7323
Epoch 43/150
514/514 [==============================] - 0s 85us/step - loss: 0.5152 - accuracy: 0.7471 - val_loss: 0.6041 - val_accuracy: 0.7126
Epoch 44/150
514/514 [==============================] - 0s 86us/step - loss: 0.4921 - accuracy: 0.7743 - val_loss: 0.5166 - val_accuracy: 0.7520
Epoch 45/150
514/514 [==============================] - 0s 87us/step - loss: 0.4841 - accuracy: 0.7607 - val_loss: 0.5072 - val_accuracy: 0.7756
Epoch 46/150
514/514 [==============================] - 0s 84us/step - loss: 0.4940 - accuracy: 0.7490 - val_loss: 0.6276 - val_accuracy: 0.6890
Epoch 47/150
514/514 [==============================] - 0s 86us/step - loss: 0.5041 - accuracy: 0.7490 - val_loss: 0.7005 - val_accuracy: 0.6811
Epoch 48/150
514/514 [==============================] - 0s 87us/step - loss: 0.5159 - accuracy: 0.7510 - val_loss: 0.5504 - val_accuracy: 0.7677
Epoch 49/150
514/514 [==============================] - 0s 82us/step - loss: 0.5147 - accuracy: 0.7451 - val_loss: 0.5350 - val_accuracy: 0.7598
Epoch 50/150
514/514 [==============================] - 0s 87us/step - loss: 0.5121 - accuracy: 0.7685 - val_loss: 0.5376 - val_accuracy: 0.7598
Epoch 51/150
514/514 [==============================] - 0s 85us/step - loss: 0.4863 - accuracy: 0.7529 - val_loss: 0.5212 - val_accuracy: 0.7598
Epoch 52/150
514/514 [==============================] - 0s 87us/step - loss: 0.5048 - accuracy: 0.7393 - val_loss: 0.5207 - val_accuracy: 0.7598
Epoch 53/150
514/514 [==============================] - 0s 82us/step - loss: 0.5258 - accuracy: 0.7432 - val_loss: 0.5652 - val_accuracy: 0.7598
Epoch 54/150
514/514 [==============================] - 0s 82us/step - loss: 0.5045 - accuracy: 0.7743 - val_loss: 0.5454 - val_accuracy: 0.7677
Epoch 55/150
514/514 [==============================] - 0s 91us/step - loss: 0.5207 - accuracy: 0.7276 - val_loss: 0.5482 - val_accuracy: 0.7756
Epoch 56/150
514/514 [==============================] - 0s 82us/step - loss: 0.4995 - accuracy: 0.7743 - val_loss: 0.5271 - val_accuracy: 0.7717
Epoch 57/150
514/514 [==============================] - 0s 86us/step - loss: 0.4821 - accuracy: 0.7588 - val_loss: 0.5321 - val_accuracy: 0.7717
Epoch 58/150
514/514 [==============================] - 0s 85us/step - loss: 0.4836 - accuracy: 0.7724 - val_loss: 0.6049 - val_accuracy: 0.7126
Epoch 59/150
514/514 [==============================] - 0s 83us/step - loss: 0.5975 - accuracy: 0.7412 - val_loss: 0.5667 - val_accuracy: 0.7362
Epoch 60/150
514/514 [==============================] - 0s 85us/step - loss: 0.5411 - accuracy: 0.7432 - val_loss: 0.6914 - val_accuracy: 0.7047
Epoch 61/150
514/514 [==============================] - 0s 86us/step - loss: 0.4808 - accuracy: 0.7685 - val_loss: 0.5299 - val_accuracy: 0.7520
Epoch 62/150
514/514 [==============================] - 0s 83us/step - loss: 0.4939 - accuracy: 0.7471 - val_loss: 0.5667 - val_accuracy: 0.7402
Epoch 63/150
514/514 [==============================] - 0s 87us/step - loss: 0.4947 - accuracy: 0.7646 - val_loss: 0.5398 - val_accuracy: 0.7677
Epoch 64/150
514/514 [==============================] - 0s 81us/step - loss: 0.5166 - accuracy: 0.7685 - val_loss: 0.5574 - val_accuracy: 0.7638
Epoch 65/150
514/514 [==============================] - 0s 84us/step - loss: 0.4838 - accuracy: 0.7607 - val_loss: 0.5844 - val_accuracy: 0.7283
Epoch 66/150
514/514 [==============================] - 0s 85us/step - loss: 0.4680 - accuracy: 0.7626 - val_loss: 0.6047 - val_accuracy: 0.7047
Epoch 67/150
514/514 [==============================] - 0s 90us/step - loss: 0.4896 - accuracy: 0.7685 - val_loss: 0.5934 - val_accuracy: 0.7480
Epoch 68/150
514/514 [==============================] - 0s 80us/step - loss: 0.4785 - accuracy: 0.7568 - val_loss: 0.5739 - val_accuracy: 0.7402
Epoch 69/150
514/514 [==============================] - 0s 90us/step - loss: 0.5297 - accuracy: 0.7529 - val_loss: 0.5264 - val_accuracy: 0.7992
Epoch 70/150
514/514 [==============================] - 0s 83us/step - loss: 0.4838 - accuracy: 0.7704 - val_loss: 0.6524 - val_accuracy: 0.6772
Epoch 71/150
514/514 [==============================] - 0s 85us/step - loss: 0.5032 - accuracy: 0.7354 - val_loss: 0.6558 - val_accuracy: 0.7205
Epoch 72/150
514/514 [==============================] - 0s 84us/step - loss: 0.5470 - accuracy: 0.7471 - val_loss: 0.5432 - val_accuracy: 0.7402
Epoch 73/150
514/514 [==============================] - 0s 81us/step - loss: 0.5271 - accuracy: 0.7393 - val_loss: 0.5421 - val_accuracy: 0.7598
Epoch 74/150
514/514 [==============================] - 0s 83us/step - loss: 0.4869 - accuracy: 0.7665 - val_loss: 0.5535 - val_accuracy: 0.7441
Epoch 75/150
514/514 [==============================] - 0s 83us/step - loss: 0.4730 - accuracy: 0.7743 - val_loss: 0.5427 - val_accuracy: 0.7559
Epoch 76/150
514/514 [==============================] - 0s 81us/step - loss: 0.4868 - accuracy: 0.7568 - val_loss: 0.6515 - val_accuracy: 0.7047
Epoch 77/150
514/514 [==============================] - 0s 87us/step - loss: 0.5087 - accuracy: 0.7782 - val_loss: 0.6406 - val_accuracy: 0.7244
Epoch 78/150
514/514 [==============================] - 0s 84us/step - loss: 0.4857 - accuracy: 0.7588 - val_loss: 0.6378 - val_accuracy: 0.7165
Epoch 79/150
514/514 [==============================] - 0s 81us/step - loss: 0.4942 - accuracy: 0.7646 - val_loss: 0.5366 - val_accuracy: 0.7638
Epoch 80/150
514/514 [==============================] - 0s 86us/step - loss: 0.4917 - accuracy: 0.7763 - val_loss: 0.5524 - val_accuracy: 0.7756
Epoch 81/150
514/514 [==============================] - 0s 84us/step - loss: 0.4869 - accuracy: 0.7685 - val_loss: 0.5352 - val_accuracy: 0.7756
Epoch 82/150
514/514 [==============================] - 0s 84us/step - loss: 0.4727 - accuracy: 0.7802 - val_loss: 0.5333 - val_accuracy: 0.7717
Epoch 83/150
514/514 [==============================] - 0s 83us/step - loss: 0.4956 - accuracy: 0.7665 - val_loss: 0.5455 - val_accuracy: 0.7638
Epoch 84/150
514/514 [==============================] - 0s 84us/step - loss: 0.4671 - accuracy: 0.7646 - val_loss: 0.5437 - val_accuracy: 0.7480
Epoch 85/150
514/514 [==============================] - 0s 90us/step - loss: 0.4975 - accuracy: 0.7490 - val_loss: 0.5264 - val_accuracy: 0.7795
Epoch 86/150
514/514 [==============================] - 0s 84us/step - loss: 0.4685 - accuracy: 0.7685 - val_loss: 0.5389 - val_accuracy: 0.7874
Epoch 87/150
514/514 [==============================] - 0s 83us/step - loss: 0.4889 - accuracy: 0.7588 - val_loss: 0.5450 - val_accuracy: 0.7677
Epoch 88/150
514/514 [==============================] - 0s 85us/step - loss: 0.4774 - accuracy: 0.7665 - val_loss: 0.5699 - val_accuracy: 0.7717
Epoch 89/150
514/514 [==============================] - 0s 86us/step - loss: 0.5005 - accuracy: 0.7471 - val_loss: 0.5341 - val_accuracy: 0.7598
Epoch 90/150
514/514 [==============================] - 0s 81us/step - loss: 0.4883 - accuracy: 0.7665 - val_loss: 0.5991 - val_accuracy: 0.7126
Epoch 91/150
514/514 [==============================] - 0s 83us/step - loss: 0.5011 - accuracy: 0.7529 - val_loss: 0.5548 - val_accuracy: 0.7835
Epoch 92/150
514/514 [==============================] - 0s 85us/step - loss: 0.4730 - accuracy: 0.7626 - val_loss: 0.5451 - val_accuracy: 0.7717
Epoch 93/150
514/514 [==============================] - 0s 82us/step - loss: 0.4916 - accuracy: 0.7588 - val_loss: 0.6453 - val_accuracy: 0.7087
Epoch 94/150
514/514 [==============================] - 0s 89us/step - loss: 0.4709 - accuracy: 0.7860 - val_loss: 0.5844 - val_accuracy: 0.7520
Epoch 95/150
514/514 [==============================] - 0s 84us/step - loss: 0.4807 - accuracy: 0.7607 - val_loss: 0.5579 - val_accuracy: 0.7559
Epoch 96/150
514/514 [==============================] - 0s 83us/step - loss: 0.4716 - accuracy: 0.7821 - val_loss: 0.6159 - val_accuracy: 0.7165
Epoch 97/150
514/514 [==============================] - 0s 87us/step - loss: 0.5118 - accuracy: 0.7549 - val_loss: 0.5654 - val_accuracy: 0.7717
Epoch 98/150
514/514 [==============================] - 0s 86us/step - loss: 0.4968 - accuracy: 0.7432 - val_loss: 0.6624 - val_accuracy: 0.6772
Epoch 99/150
514/514 [==============================] - 0s 82us/step - loss: 0.4797 - accuracy: 0.7510 - val_loss: 0.5920 - val_accuracy: 0.7559
Epoch 100/150
514/514 [==============================] - 0s 89us/step - loss: 0.4651 - accuracy: 0.7665 - val_loss: 0.6583 - val_accuracy: 0.7087
Epoch 101/150
514/514 [==============================] - 0s 82us/step - loss: 0.5264 - accuracy: 0.7510 - val_loss: 0.6228 - val_accuracy: 0.6850
Epoch 102/150
514/514 [==============================] - 0s 84us/step - loss: 0.4899 - accuracy: 0.7588 - val_loss: 0.5737 - val_accuracy: 0.7795
Epoch 103/150
514/514 [==============================] - 0s 84us/step - loss: 0.4627 - accuracy: 0.7724 - val_loss: 0.5349 - val_accuracy: 0.7520
Epoch 104/150
514/514 [==============================] - 0s 85us/step - loss: 0.4673 - accuracy: 0.7821 - val_loss: 0.6017 - val_accuracy: 0.7677
Epoch 105/150
514/514 [==============================] - 0s 83us/step - loss: 0.4656 - accuracy: 0.7802 - val_loss: 0.7283 - val_accuracy: 0.6929
Epoch 106/150
514/514 [==============================] - 0s 83us/step - loss: 0.5066 - accuracy: 0.7451 - val_loss: 0.5527 - val_accuracy: 0.7953
Epoch 107/150
514/514 [==============================] - 0s 88us/step - loss: 0.4841 - accuracy: 0.7743 - val_loss: 0.6410 - val_accuracy: 0.7126
Epoch 108/150
514/514 [==============================] - 0s 85us/step - loss: 0.4963 - accuracy: 0.7607 - val_loss: 0.5389 - val_accuracy: 0.7638
Epoch 109/150
514/514 [==============================] - 0s 84us/step - loss: 0.4997 - accuracy: 0.7412 - val_loss: 0.5512 - val_accuracy: 0.7756
Epoch 110/150
514/514 [==============================] - 0s 84us/step - loss: 0.4779 - accuracy: 0.7704 - val_loss: 0.5973 - val_accuracy: 0.7559
Epoch 111/150
514/514 [==============================] - 0s 80us/step - loss: 0.4576 - accuracy: 0.7802 - val_loss: 0.5544 - val_accuracy: 0.7835
Epoch 112/150
514/514 [==============================] - 0s 83us/step - loss: 0.4560 - accuracy: 0.7704 - val_loss: 0.5552 - val_accuracy: 0.7756
Epoch 113/150
514/514 [==============================] - 0s 93us/step - loss: 0.4981 - accuracy: 0.7588 - val_loss: 0.5511 - val_accuracy: 0.7913
Epoch 114/150
514/514 [==============================] - 0s 83us/step - loss: 0.4749 - accuracy: 0.7704 - val_loss: 0.5428 - val_accuracy: 0.7559
Epoch 115/150
514/514 [==============================] - 0s 82us/step - loss: 0.4652 - accuracy: 0.7782 - val_loss: 0.5614 - val_accuracy: 0.7795
Epoch 116/150
514/514 [==============================] - 0s 84us/step - loss: 0.5166 - accuracy: 0.7393 - val_loss: 0.5933 - val_accuracy: 0.7717
Epoch 117/150
514/514 [==============================] - 0s 90us/step - loss: 0.5090 - accuracy: 0.7568 - val_loss: 0.6399 - val_accuracy: 0.7126
Epoch 118/150
514/514 [==============================] - 0s 86us/step - loss: 0.4792 - accuracy: 0.7607 - val_loss: 0.5661 - val_accuracy: 0.7874
Epoch 119/150
514/514 [==============================] - 0s 84us/step - loss: 0.4760 - accuracy: 0.7704 - val_loss: 0.5853 - val_accuracy: 0.7717
Epoch 120/150
514/514 [==============================] - 0s 84us/step - loss: 0.4744 - accuracy: 0.7510 - val_loss: 0.5679 - val_accuracy: 0.7598
Epoch 121/150
514/514 [==============================] - 0s 82us/step - loss: 0.4900 - accuracy: 0.7588 - val_loss: 0.5482 - val_accuracy: 0.7598
Epoch 122/150
514/514 [==============================] - 0s 85us/step - loss: 0.4587 - accuracy: 0.7821 - val_loss: 0.6350 - val_accuracy: 0.7126
Epoch 123/150
514/514 [==============================] - 0s 83us/step - loss: 0.4835 - accuracy: 0.7646 - val_loss: 0.6168 - val_accuracy: 0.7323
Epoch 124/150
514/514 [==============================] - 0s 84us/step - loss: 0.4756 - accuracy: 0.7549 - val_loss: 0.5484 - val_accuracy: 0.7795
Epoch 125/150
514/514 [==============================] - 0s 86us/step - loss: 0.4588 - accuracy: 0.7840 - val_loss: 0.5654 - val_accuracy: 0.7480
Epoch 126/150
514/514 [==============================] - 0s 86us/step - loss: 0.4784 - accuracy: 0.7743 - val_loss: 0.5687 - val_accuracy: 0.7677
Epoch 127/150
514/514 [==============================] - 0s 87us/step - loss: 0.4823 - accuracy: 0.7665 - val_loss: 0.5516 - val_accuracy: 0.7913
Epoch 128/150
514/514 [==============================] - 0s 84us/step - loss: 0.4629 - accuracy: 0.7957 - val_loss: 0.5456 - val_accuracy: 0.7795
Epoch 129/150
514/514 [==============================] - 0s 84us/step - loss: 0.4536 - accuracy: 0.7821 - val_loss: 0.5622 - val_accuracy: 0.7835
Epoch 130/150
514/514 [==============================] - 0s 84us/step - loss: 0.4619 - accuracy: 0.7782 - val_loss: 0.6583 - val_accuracy: 0.7165
Epoch 131/150
514/514 [==============================] - 0s 94us/step - loss: 0.4775 - accuracy: 0.7665 - val_loss: 0.5590 - val_accuracy: 0.7835
Epoch 132/150
514/514 [==============================] - 0s 92us/step - loss: 0.4851 - accuracy: 0.7646 - val_loss: 0.5746 - val_accuracy: 0.7717
Epoch 133/150
514/514 [==============================] - 0s 102us/step - loss: 0.5051 - accuracy: 0.7685 - val_loss: 0.5513 - val_accuracy: 0.7756
Epoch 134/150
514/514 [==============================] - 0s 101us/step - loss: 0.4767 - accuracy: 0.7665 - val_loss: 0.5656 - val_accuracy: 0.7441
Epoch 135/150
514/514 [==============================] - 0s 95us/step - loss: 0.4915 - accuracy: 0.7607 - val_loss: 0.5767 - val_accuracy: 0.7835
Epoch 136/150
514/514 [==============================] - 0s 82us/step - loss: 0.4791 - accuracy: 0.7490 - val_loss: 0.6334 - val_accuracy: 0.7126
Epoch 137/150
514/514 [==============================] - 0s 83us/step - loss: 0.5259 - accuracy: 0.7451 - val_loss: 0.6134 - val_accuracy: 0.7362
Epoch 138/150
514/514 [==============================] - 0s 85us/step - loss: 0.5077 - accuracy: 0.7568 - val_loss: 0.6099 - val_accuracy: 0.7441
Epoch 139/150
514/514 [==============================] - 0s 83us/step - loss: 0.4426 - accuracy: 0.7879 - val_loss: 0.6117 - val_accuracy: 0.7165
Epoch 140/150
514/514 [==============================] - 0s 84us/step - loss: 0.4680 - accuracy: 0.7802 - val_loss: 0.5698 - val_accuracy: 0.7638
Epoch 141/150
514/514 [==============================] - 0s 82us/step - loss: 0.4466 - accuracy: 0.7743 - val_loss: 0.6924 - val_accuracy: 0.7047
Epoch 142/150
514/514 [==============================] - 0s 85us/step - loss: 0.5059 - accuracy: 0.7510 - val_loss: 0.6117 - val_accuracy: 0.7402
Epoch 143/150
514/514 [==============================] - 0s 83us/step - loss: 0.4822 - accuracy: 0.7529 - val_loss: 0.6664 - val_accuracy: 0.7126
Epoch 144/150
514/514 [==============================] - 0s 87us/step - loss: 0.4672 - accuracy: 0.7918 - val_loss: 0.5549 - val_accuracy: 0.7638
Epoch 145/150
514/514 [==============================] - 0s 87us/step - loss: 0.4434 - accuracy: 0.7899 - val_loss: 0.6970 - val_accuracy: 0.7362
Epoch 146/150
514/514 [==============================] - 0s 85us/step - loss: 0.4812 - accuracy: 0.7840 - val_loss: 0.5584 - val_accuracy: 0.7795
Epoch 147/150
514/514 [==============================] - 0s 83us/step - loss: 0.4713 - accuracy: 0.7704 - val_loss: 0.5789 - val_accuracy: 0.7835
Epoch 148/150
514/514 [==============================] - 0s 87us/step - loss: 0.4850 - accuracy: 0.7743 - val_loss: 0.6186 - val_accuracy: 0.7402
Epoch 149/150
514/514 [==============================] - 0s 83us/step - loss: 0.4625 - accuracy: 0.7763 - val_loss: 0.6232 - val_accuracy: 0.7441
Epoch 150/150
514/514 [==============================] - 0s 82us/step - loss: 0.4588 - accuracy: 0.7802 - val_loss: 0.5964 - val_accuracy: 0.7795
Out[9]:
<keras.callbacks.callbacks.History at 0x7f21ed61b630>

Manual k-Fold Cross Validation

[Figure: illustration of k-fold cross validation]

The gold standard for machine learning model evaluation is k-fold cross validation. It provides a robust estimate of the performance of a model on unseen data. It does this by splitting the dataset into $k$ subsets, taking turns training models on all subsets except one that is held out, and evaluating model performance on the held-out validation subset. The process is repeated until every subset has been given an opportunity to be the held-out validation set. The performance measure is then averaged across all models that are created.

Cross validation is often not used for evaluating deep learning models because of the greater computational expense. For example, k-fold cross validation is often used with $5$ or $10$ folds. As such, $5$ or $10$ models must be constructed and evaluated, greatly adding to the evaluation time of a model.

Nevertheless, when the problem is small enough or if you have sufficient compute resources, k-fold cross validation can give you a less biased estimate of the performance of your model.

In the example below we use the StratifiedKFold class from scikit-learn to split up the training dataset into $10$ folds. The folds are stratified, meaning that each fold preserves, as closely as possible, the proportion of instances of each class found in the full dataset.

The example creates and evaluates $10$ models using the $10$ splits of the data and collects all of the scores. The verbose output for each epoch is turned off by passing verbose=0 to the fit() and evaluate() functions on the model.

The performance is printed and stored for each model. The average and standard deviation of the model performance is then printed at the end of the run to provide a robust estimate of model accuracy.

In [16]:
from sklearn.model_selection import StratifiedKFold

# define 10-fold cross validation test harness
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=2)

cnt = 0
cvscores = []
for train, test in kfold.split(X, y):
    cnt+=1
    
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    
    # Fit the model
    model.fit(X[train], y[train], epochs=150, batch_size=10, verbose=0)
    
    # evaluate the model
    scores = model.evaluate(X[test], y[test], verbose=0)
    print(f'Fold {cnt} {model.metrics_names[1]}: {round(scores[1]*100,3)}%')
    cvscores.append(scores[1] * 100)
    
print(f'\n{model.metrics_names[1]}: {round(np.mean(cvscores),3)}%, (+/- {round(np.std(cvscores),3)})')    
Fold 1 accuracy: 67.532%
Fold 2 accuracy: 62.338%
Fold 3 accuracy: 70.13%
Fold 4 accuracy: 72.727%
Fold 5 accuracy: 74.026%
Fold 6 accuracy: 68.831%
Fold 7 accuracy: 66.234%
Fold 8 accuracy: 74.026%
Fold 9 accuracy: 75.0%
Fold 10 accuracy: 65.789%

accuracy: 69.663%, (+/- 4.023)
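
As an alternative to the manual loop, Keras 2.x also ships a scikit-learn wrapper so that scikit-learn can drive the cross validation itself. A hedged sketch using the KerasClassifier wrapper (the fold scores come back as fractions, hence the scaling by $100$):

In [ ]:
# Sketch: let scikit-learn drive the cross validation via the KerasClassifier
# wrapper from keras.wrappers.scikit_learn (available in Keras 2.x).
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def create_model():
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

estimator = KerasClassifier(build_fn=create_model, epochs=150, batch_size=10, verbose=0)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=2)
results = cross_val_score(estimator, X, y, cv=kfold)
print(f'accuracy: {round(results.mean() * 100, 3)}%, (+/- {round(results.std() * 100, 3)})')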

Keras Metrics and Custom Metrics

The Keras library provides a way to calculate and report on a suite of standard metrics when training deep learning models.

In addition to offering standard metrics for classification and regression problems, Keras also allows you to define and report on your own custom metrics when training deep learning models. This is particularly useful if you want to keep track of a performance measure that better captures the skill of your model during training.

Keras Metrics

Keras allows you to list the metrics to monitor during the training of your model.

You can do this by specifying the metrics argument and providing a list of function names (or function name aliases) to the compile() function on your model.

The specific metrics that you list can be the names of Keras functions (like mean_squared_error) or string aliases for those functions (like mse).

Metric values are recorded at the end of each epoch on the training dataset. If a validation dataset is also provided, then the metric recorded is also calculated for the validation dataset.

All metrics are reported in verbose output and in the history object returned from calling the fit() function. In both cases, the name of the metric function is used as the key for the metric values. In the case of metrics for the validation dataset, the “val_” prefix is added to the key.

Both loss functions and explicitly defined Keras metrics can be used as training metrics.
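
For example, the following compile() calls request the same metric in different ways. A minimal sketch, re-using the model defined earlier:

In [ ]:
# Sketch: equivalent ways to request mean squared error as a training metric
# (re-uses the `model` defined above).
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['mean_squared_error'])  # full function name
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['mse'])                 # string alias

# A loss function can also be tracked as a metric:
from keras.losses import mean_squared_error
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=[mean_squared_error])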

Keras Regression Metrics

Metrics that you can use in Keras on regression problems.

  • Mean Squared Error: mean_squared_error, MSE or mse
  • Mean Absolute Error: mean_absolute_error, MAE, mae
  • Mean Absolute Percentage Error: mean_absolute_percentage_error, MAPE, mape
  • Cosine Proximity: cosine_proximity, cosine

The example below demonstrates these $4$ built-in regression metrics on a simple contrived regression problem.

In [17]:
import matplotlib.pyplot as plt

# prepare sequence
X = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])

# create model
model = Sequential()
model.add(Dense(2, input_dim=1))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', 'mape', 'cosine'])

# train model
history = model.fit(X, X, epochs=500, batch_size=len(X), verbose=0)
In [46]:
keys = list(history.history.keys())

# plot metrics
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(13,4))
ax[0].plot(history.history[keys[1]], label=keys[1])
ax[0].plot(history.history[keys[2]], label=keys[2])
ax[1].plot(history.history[keys[3]], label=keys[3], color='g')
ax[2].plot(history.history[keys[4]], label=keys[4], color='r')
ax[0].legend()
ax[1].legend()
ax[2].legend()
plt.tight_layout()
plt.show()

Keras Classification Metrics

List of metrics that you can use in Keras for classification problems.

  • Binary Accuracy: binary_accuracy, acc
  • Categorical Accuracy: categorical_accuracy, acc
  • Sparse Categorical Accuracy: sparse_categorical_accuracy
  • Top k Categorical Accuracy: top_k_categorical_accuracy (requires that you specify a k parameter; see the sketch below)
  • Sparse Top k Categorical Accuracy: sparse_top_k_categorical_accuracy (requires that you specify a k parameter)
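
For the top k metrics, one common way to supply the k parameter is a small wrapper function whose name becomes the reported metric key. A minimal sketch on a contrived $3$-class problem (the wrapper name top_2_accuracy is ours):

In [ ]:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.metrics import top_k_categorical_accuracy

# Wrapper fixing k=2; Keras reports the metric under the function's name.
def top_2_accuracy(y_true, y_pred):
    return top_k_categorical_accuracy(y_true, y_pred, k=2)

# Contrived 3-class problem with one-hot targets.
Xm = np.random.rand(30, 4)
ym = np.eye(3)[np.random.randint(0, 3, 30)]

model_k = Sequential()
model_k.add(Dense(8, input_dim=4, activation='relu'))
model_k.add(Dense(3, activation='softmax'))
model_k.compile(loss='categorical_crossentropy', optimizer='adam',
                metrics=['categorical_accuracy', top_2_accuracy])
model_k.fit(Xm, ym, epochs=10, batch_size=5, verbose=0)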

Whether your problem is binary or multi-class classification, you can specify the accuracy metric to report on accuracy.

Below is an example of a binary classification problem with the built-in accuracy metric demonstrated.

In [52]:
# Prepare sequence
X = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# create model
model = Sequential()
model.add(Dense(2, input_dim=1))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# train model
history = model.fit(X, y, epochs=400, batch_size=len(X), verbose=0)

# plot metrics
plt.plot(history.history['accuracy'])
plt.show()

Custom Metrics in Keras

You can also define your own metrics and specify the function name in the list of functions for the metrics argument when calling the compile() function.

A regression metric that is good to keep track of but not available in the Keras API is Root Mean Squared Error, or RMSE.

You can get an idea of how to write a custom metric by examining the code for an existing metric.

In [53]:
import keras.backend as K

# Example for the mean_squared_error loss function and metric in Keras.
def mean_squared_error(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

$K$ is the backend used by Keras.

From this example and other examples of loss functions and metrics, the approach is to use standard math functions on the backend to calculate the metric of interest.

For example, we can write a custom metric to calculate RMSE as follows:

In [54]:
def rmse(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))

You can see the function is the same code as MSE with the addition of the sqrt() wrapping the result.

We can test this in our regression example as follows. Note that we simply list the function name directly rather than providing it as a string or alias for Keras to resolve.

In [58]:
# prepare sequence
X = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])

# create model
model = Sequential()
model.add(Dense(2, input_dim=1, activation='relu'))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam', metrics=[rmse])

# train model
history = model.fit(X, X, epochs=500, batch_size=len(X), verbose=0)

# plot metrics
plt.plot(history.history['rmse'], label='RMSE')
plt.legend()
plt.show()

How to Use Learning Curves to Diagnose Model Performance

A learning curve is a plot of model learning performance over experience or time.

Learning curves are a widely used diagnostic tool in machine learning for algorithms that learn from a training dataset incrementally. The model can be evaluated on the training dataset and on a hold-out validation dataset after each update during training, and plots of the measured performance can be created to show learning curves.

Reviewing the learning curves of a model during training can help diagnose problems with learning, such as an underfit or overfit model, as well as whether the training and validation datasets are suitably representative.

Generally, a learning curve is a plot that shows time or experience on the x-axis and learning or improvement on the y-axis.
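
In Keras, both quantities can be read straight off the History object returned by fit(). A minimal sketch, assuming a history produced by a fit() call with a validation split, as in the earlier examples:

In [ ]:
import matplotlib.pyplot as plt

# Sketch: train and validation learning curves from a History object
# (assumes history = model.fit(..., validation_split=0.33, ...)).
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()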

Types of Curves

  • Train Learning Curve: Learning curve calculated from the training dataset that gives an idea of how well the model is learning.

  • Validation Learning Curve: Learning curve calculated from a hold-out validation dataset that gives an idea of how well the model is generalizing.

Diagnosing Model Behavior

The shape and dynamics of a learning curve can be used to diagnose the behavior of a machine learning model and, in turn, suggest the type of configuration changes that may improve learning and/or performance.

There are $3$ common dynamics that you are likely to observe in learning curves:

  • Underfit.
  • Overfit.
  • Good Fit.

We will take a closer look at each with examples. The examples will assume that we are looking at a minimizing metric, meaning that smaller relative scores on the y-axis indicate better learning.

Underfit Learning Curves

Underfitting refers to a model that cannot learn the training dataset.

An underfit model can be identified from the learning curve of the training loss only.

It may show a flat line or noisy values of relatively high loss, indicating that the model was unable to learn the training dataset at all.

An example of this is provided below and is common when the model does not have a suitable capacity (nodes and layers) for the complexity of the dataset.

Example of Training Learning Curve Showing An Underfit Model That Does Not Have Sufficient Capacity


An underfit model may also be identified by a training loss that is decreasing and continues to decrease at the end of the plot.

This indicates that the model is capable of further learning and possible further improvements and that the training process was halted prematurely.

Example of Training Learning Curve Showing an Underfit Model That Requires Further Training


A plot of learning curves shows underfitting if:

  • The training loss remains flat regardless of training.
  • The training loss continues to decrease until the end of training.

Overfit Learning Curves

Overfitting refers to a model that has learned the training dataset too well, including the statistical noise or random fluctuations in the training dataset.

The problem with overfitting is that the more specialized the model becomes to the training data, the less well it is able to generalize to new data, resulting in an increase in generalization error. This increase in generalization error can be measured by the performance of the model on the validation dataset.

This often occurs if the model has more capacity (layers and neurons) than is required for the problem, and, in turn, too much flexibility. It can also occur if the model is trained for too long.

A plot of learning curves shows overfitting if:

  • The plot of training loss continues to decrease with experience.
  • The plot of validation loss decreases to a point and begins increasing again.

The inflection point in validation loss may be the point at which training could be halted as experience after that point shows the dynamics of overfitting.
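
In Keras, halting at (or near) that inflection point can be automated with the EarlyStopping callback. A minimal sketch; the patience value is illustrative:

In [ ]:
from keras.callbacks import EarlyStopping

# Stop when validation loss has not improved for `patience` epochs and roll
# back to the best weights seen so far (the patience value is illustrative).
early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)

# model.fit(X_train, y_train, validation_data=(X_test, y_test),
#           epochs=150, batch_size=10, callbacks=[early_stop], verbose=0)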

The example plot below demonstrates a case of overfitting.

[Figure: train and validation learning curves showing overfitting]

Good Fit Learning Curves

A good fit is the goal of the learning algorithm and exists between an overfit and underfit model.

A good fit is identified by a training and validation loss that decreases to a point of stability with a minimal gap between the two final loss values.

The loss of the model will almost always be lower on the training dataset than on the validation dataset. This means that we should expect some gap between the train and validation loss learning curves. This gap is referred to as the generalization gap.

A plot of learning curves shows a good fit if:

  • The plot of training loss decreases to a point of stability.
  • The plot of validation loss decreases to a point of stability and has a small gap with the training loss.

Continued training of a good fit will likely lead to an overfit.

The example plot below demonstrates a case of a good fit.

[Figure: train and validation learning curves showing a good fit]

Diagnosing Unrepresentative Datasets

Learning curves can also be used to diagnose properties of a dataset and whether it is relatively representative.

An unrepresentative dataset means a dataset that may not capture the statistical characteristics relative to another dataset drawn from the same domain, such as between a train and a validation dataset. This can commonly occur if the number of samples in a dataset is too small, relative to another dataset.

There are two common cases that could be observed; they are:

  • Training dataset is relatively unrepresentative.
  • Validation dataset is relatively unrepresentative.

Unrepresentative Train Dataset

An unrepresentative training dataset means that the training dataset does not provide sufficient information to learn the problem, relative to the validation dataset used to evaluate it.

This may occur if the training dataset has too few examples as compared to the validation dataset.

This situation can be identified by a learning curve for training loss that shows improvement and similarly a learning curve for validation loss that shows improvement, but a large gap remains between both curves.

Example of Train and Validation Learning Curves Showing a Training Dataset That May Be too Small Relative to the Validation Dataset


Unrepresentative Validation Dataset

An unrepresentative validation dataset means that the validation dataset does not provide sufficient information to evaluate the ability of the model to generalize.

This may occur if the validation dataset has too few examples as compared to the training dataset.

This case can be identified by a learning curve for training loss that looks like a good fit (or other fits) and a learning curve for validation loss that shows noisy movements around the training loss.

Example of Train and Validation Learning Curves Showing a Validation Dataset That May Be too Small Relative to the Training Dataset


It may also be identified by a validation loss that is lower than the training loss. In this case, it indicates that the validation dataset may be easier for the model to predict than the training dataset.

Example of Train and Validation Learning Curves Showing a Validation Dataset That Is Easier to Predict Than the Training Dataset

