Neural Network Programming with TensorFlow

Implementing feedforward networks with images

Now we will look at how to use feedforward networks to classify images. We will be using the notMNIST dataset, which consists of images of ten letters, A to J.

The notMNIST dataset is similar to the MNIST dataset but uses letters instead of digits (http://yaroslavvb.blogspot.in/2011/09/notmnist-dataset.html).

We have reduced the original dataset to a smaller version for training so that you can get started easily. Download the ZIP files from https://1drv.ms/f/s!Av6fk5nQi2j-kniw-8GtP8sdWejs and extract them into the folder that will contain the dataset.
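If you prefer to extract the archives programmatically, the following is a minimal sketch; the archive names notMNIST_small.zip and notMNIST_large_v2.zip are assumptions based on the folder names used later, so adjust them to the files you actually downloaded:

import zipfile

# Assumed archive names -- adjust them to match the files you downloaded.
for archive in ['notMNIST_small.zip', 'notMNIST_large_v2.zip']:
    with zipfile.ZipFile(archive) as zf:
        zf.extractall('.')  # produces ./notMNIST_small/A ... ./notMNIST_large_v2/J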

The pickle module of Python implements an algorithm for serializing and de-serializing a Python object structure. Pickling is the process in which a Python object hierarchy is converted into a byte stream; unpickling is the inverse operation, where a byte stream is converted back into an object hierarchy. Pickling (and unpickling) is also known as serialization, marshaling, or flattening.
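As a quick, hypothetical illustration of pickling and unpickling (the file name demo.pickle and the dictionary contents are arbitrary):

import pickle

data = {'letters': ['A', 'B', 'C'], 'count': 3}
with open('demo.pickle', 'wb') as f:
    pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)   # serialize (pickle)
with open('demo.pickle', 'rb') as f:
    restored = pickle.load(f)                        # de-serialize (unpickle)
print(restored == data)                              # True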

First, we load the images into numpy.ndarray objects from the following lists of folders using the maybe_pickle(..) method:

test_folders = ['./notMNIST_small/A', './notMNIST_small/B', './notMNIST_small/C', './notMNIST_small/D',
'./notMNIST_small/E', './notMNIST_small/F', './notMNIST_small/G', './notMNIST_small/H',
'./notMNIST_small/I', './notMNIST_small/J']
train_folders = ['./notMNIST_large_v2/A', './notMNIST_large_v2/B', './notMNIST_large_v2/C', './notMNIST_large_v2/D',
'./notMNIST_large_v2/E', './notMNIST_large_v2/F', './notMNIST_large_v2/G', './notMNIST_large_v2/H',
'./notMNIST_large_v2/I', './notMNIST_large_v2/J']
maybe_pickle(data_folders, min_num_images_per_class, force=False)

The maybe_pickle method uses load_letter to load the images from a single folder into an ndarray:

def load_letter(folder, min_num_images):
    image_files = os.listdir(folder)
    dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
                         dtype=np.float32)
    num_images = 0
    for image in image_files:
        image_file = os.path.join(folder, image)
        try:
            # Scale pixel values from [0, 255] to [-0.5, 0.5].
            image_data = (ndimage.imread(image_file).astype(float) -
                          pixel_depth / 2) / pixel_depth
            if image_data.shape != (image_size, image_size):
                raise Exception('Unexpected image shape: %s' % str(image_data.shape))
            dataset[num_images, :, :] = image_data
            num_images = num_images + 1
        except IOError as e:
            print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')

    dataset = dataset[0:num_images, :, :]
    if num_images < min_num_images:
        raise Exception('Fewer images than expected: %d < %d' %
                        (num_images, min_num_images))

    print('Dataset tensor:', dataset.shape)
    print('Mean:', np.mean(dataset))
    print('Standard deviation:', np.std(dataset))
    return dataset

The maybe_pickle method is called for two sets of folders, train_folders and test_folders:

train_datasets = maybe_pickle(train_folders, 100)
test_datasets = maybe_pickle(test_folders, 50)

The output shows the dataset_names list returned by maybe_pickle: first the pickle file names created for the notMNIST_large_v2 training folders, then those for the notMNIST_small test folders.

Next, merge_datasets is called; it combines the pickle files for each character into the following ndarrays (a sketch of this helper appears after the full code listing below):

  • valid_dataset
  • valid_labels
  • train_dataset
  • train_labels
train_size = 1000
valid_size = 500
test_size = 500

valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)

The output of the preceding code is as follows:

Training dataset and labels shape: (1000, 28, 28) (1000,)
Validation dataset and labels shape: (500, 28, 28) (500,)
Testing dataset and labels shape: (500, 28, 28) (500,)

Finally, the notMNIST.pickle file is created by storing each of these ndarrays as key-value pairs, where the keys are train_dataset, train_labels, valid_dataset, valid_labels, test_dataset, and test_labels, and the values are the respective ndarrays, as shown in the following code:


try:
    f = open(pickle_file, 'wb')
    save = {
        'train_dataset': train_dataset,
        'train_labels': train_labels,
        'valid_dataset': valid_dataset,
        'valid_labels': valid_labels,
        'test_dataset': test_dataset,
        'test_labels': test_labels,
    }
    pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
    f.close()
except Exception as e:
    print('Unable to save data to', pickle_file, ':', e)
    raise

This is the full code for generating the notMNIST.pickle file:

from __future__ import print_function
import numpy as np
import os
from scipy import ndimage
from six.moves import cPickle as pickle

data_root = '.' # Change me to store data elsewhere

num_classes = 10
np.random.seed(133)

test_folders = ['./notMNIST_small/A', './notMNIST_small/B', './notMNIST_small/C', './notMNIST_small/D',
'./notMNIST_small/E', './notMNIST_small/F', './notMNIST_small/G', './notMNIST_small/H',
'./notMNIST_small/I', './notMNIST_small/J']
train_folders = ['./notMNIST_large_v2/A', './notMNIST_large_v2/B', './notMNIST_large_v2/C', './notMNIST_large_v2/D',
'./notMNIST_large_v2/E', './notMNIST_large_v2/F', './notMNIST_large_v2/G', './notMNIST_large_v2/H',
'./notMNIST_large_v2/I', './notMNIST_large_v2/J']

image_size = 28 # Pixel width and height.
pixel_depth = 255.0

def load_letter(folder, min_num_images):
    image_files = os.listdir(folder)
    dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
                         dtype=np.float32)
    num_images = 0
    for image in image_files:
        image_file = os.path.join(folder, image)
        try:
            # Scale pixel values from [0, 255] to [-0.5, 0.5].
            image_data = (ndimage.imread(image_file).astype(float) -
                          pixel_depth / 2) / pixel_depth
            if image_data.shape != (image_size, image_size):
                raise Exception('Unexpected image shape: %s' % str(image_data.shape))
            dataset[num_images, :, :] = image_data
            num_images = num_images + 1
        except IOError as e:
            print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')

    dataset = dataset[0:num_images, :, :]
    if num_images < min_num_images:
        raise Exception('Fewer images than expected: %d < %d' %
                        (num_images, min_num_images))

    print('Dataset tensor:', dataset.shape)
    print('Mean:', np.mean(dataset))
    print('Standard deviation:', np.std(dataset))
    return dataset

def maybe_pickle(data_folders, min_num_images_per_class, force=False):
    dataset_names = []
    for folder in data_folders:
        set_filename = folder + '.pickle'
        dataset_names.append(set_filename)
        if os.path.exists(set_filename) and not force:
            # You may override this by setting force=True.
            print('%s already present - Skipping pickling.' % set_filename)
        else:
            print('Pickling %s.' % set_filename)
            dataset = load_letter(folder, min_num_images_per_class)
            try:
                with open(set_filename, 'wb') as f:
                    pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
            except Exception as e:
                print('Unable to save data to', set_filename, ':', e)

    return dataset_names

def make_arrays(nb_rows, img_size):
    if nb_rows:
        dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
        labels = np.ndarray(nb_rows, dtype=np.int32)
    else:
        dataset, labels = None, None
    return dataset, labels
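The merge_datasets helper used earlier is not reproduced in the listing above. A minimal sketch, consistent with the calls shown earlier and assuming each class pickle contributes an equal slice to the validation and training sets, could look like this:

def merge_datasets(pickle_files, train_size, valid_size=0):
    # One pickle file per letter class; sample an equal slice from each.
    num_classes = len(pickle_files)
    valid_dataset, valid_labels = make_arrays(valid_size, image_size)
    train_dataset, train_labels = make_arrays(train_size, image_size)
    vsize_per_class = valid_size // num_classes
    tsize_per_class = train_size // num_classes

    start_v, start_t = 0, 0
    end_v, end_t = vsize_per_class, tsize_per_class
    end_l = vsize_per_class + tsize_per_class
    for label, pickle_file in enumerate(pickle_files):
        try:
            with open(pickle_file, 'rb') as f:
                letter_set = pickle.load(f)
                # Shuffle so that validation and training samples are random.
                np.random.shuffle(letter_set)
                if valid_dataset is not None:
                    valid_dataset[start_v:end_v, :, :] = letter_set[:vsize_per_class, :, :]
                    valid_labels[start_v:end_v] = label
                    start_v += vsize_per_class
                    end_v += vsize_per_class
                train_dataset[start_t:end_t, :, :] = letter_set[vsize_per_class:end_l, :, :]
                train_labels[start_t:end_t] = label
                start_t += tsize_per_class
                end_t += tsize_per_class
        except Exception as e:
            print('Unable to process data from', pickle_file, ':', e)
            raise

    return valid_dataset, valid_labels, train_dataset, train_labels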

Let's look at how to load the data from the pickle file created earlier and run a network with one hidden layer.

First, we will load the training, testing, and validation datasets (ndarray) from the notMNIST.pickle file:

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    training_dataset = save['train_dataset']
    training_labels = save['train_labels']
    validation_dataset = save['valid_dataset']
    validation_labels = save['valid_labels']
    test_dataset = save['test_dataset']
    test_labels = save['test_labels']

print('Training set', training_dataset.shape, training_labels.shape)
print('Validation set', validation_dataset.shape, validation_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)

You will see an output similar to the following listing:

Training set (1000, 28, 28) (1000,)
Validation set (500, 28, 28) (500,)
Test set (500, 28, 28) (500,)

Next, we reformat each dataset into a two-dimensional array, and each set of labels into one-hot vectors, so that the data is easier to process with TensorFlow:

def reformat(dataset, labels):
    dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
    # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
    labels = (np.arange(num_of_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels

train_dataset, train_labels = reformat(training_dataset, training_labels)
valid_dataset, valid_labels = reformat(validation_dataset, validation_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)

print('Training dataset shape', train_dataset.shape, train_labels.shape)
print('Validation dataset shape', valid_dataset.shape, valid_labels.shape)
print('Test dataset shape', test_dataset.shape, test_labels.shape)

You will see the following output:

Training dataset shape (1000, 784) (1000, 10)
Validation dataset shape (500, 784) (500, 10)
Test dataset shape (500, 784) (500, 10)
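To see how the broadcast comparison in reformat produces one-hot rows, here is a small, self-contained example (the label values are illustrative):

import numpy as np

labels = np.array([0, 3, 9])
one_hot = (np.arange(10) == labels[:, None]).astype(np.float32)
print(one_hot.shape)   # (3, 10)
print(one_hot[1])      # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]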

Next, we define the graph into which all the variables and operations will be loaded.

With image_size = 28 and no_of_neurons = 1024, the weight and bias shapes are: w1 (784 x 1024), b1 (1024), w2 (1024 x 10), and b2 (10).

The number of neurons in the hidden layer should be chosen carefully: too few neurons lead to lower accuracy, while too many lead to overfitting.

We will initialize the TensorFlow graph, create placeholders for the training data and labels, and define constants for the validation and test datasets.

We will also define weights and biases for two layers:

graph = tf.Graph()
no_of_neurons = 1024
with graph.as_default():
    # Placeholders that will be fed
    # at run time with a training minibatch in the session
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_of_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    w1 = tf.Variable(tf.truncated_normal([image_size * image_size, no_of_neurons]))
    b1 = tf.Variable(tf.zeros([no_of_neurons]))

    w2 = tf.Variable(
        tf.truncated_normal([no_of_neurons, num_of_labels]))
    b2 = tf.Variable(tf.zeros([num_of_labels]))

Next, we define the hidden layer tensor and compute the logits:

hidden1 = tf.nn.relu(tf.matmul(tf_train_dataset, w1) + b1)
logits = tf.matmul(hidden1, w2) + b2

The loss function for our network is the cross entropy of the softmax applied to the logits, averaged over the minibatch:

# Training computation.
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))

# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

Next, we compute the training predictions by applying softmax to normalize the logits:

train_prediction = tf.nn.softmax(logits)

We calculate the validation and test predictions in the same way. Notice that the ReLU activation function is applied to the hidden layer output (tf.matmul(tf_valid_dataset, w1) + b1) before it is passed through the output layer:

valid_prediction = tf.nn.softmax(
    tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, w1) + b1),
              w2) + b2)
test_prediction = tf.nn.softmax(
    tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, w1) + b1), w2) + b2)

Now we will create a TensorFlow session and feed the loaded datasets through the neural network we created:

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print("Initialized")
    for step in xrange(num_steps):
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        minibatch_accuracy = accuracy(predictions, batch_labels)
        validation_accuracy = accuracy(
            valid_prediction.eval(), valid_labels)
        if (step % 10 == 0):
            print("Minibatch loss at step", step, ":", l)
            print("Minibatch accuracy: %.1f%%" % minibatch_accuracy)
            print("Validation accuracy: %.1f%%" % validation_accuracy)
        minibatch_acc.append(minibatch_accuracy)
        validation_acc.append(validation_accuracy)
    t = [np.array(minibatch_acc)]
    t.append(validation_acc)
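The accuracy helper, along with batch_size, num_steps, minibatch_acc, and validation_acc, is defined in the complete code. A common definition, which we assume here, compares the arg max of each prediction row with the arg max of the corresponding one-hot label:

def accuracy(predictions, labels):
    # Percentage of samples whose highest-scoring class matches the true label.
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])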

The complete code can be found at the preceding GitHub link. Notice that we are appending the minibatch and validation accuracy values to arrays that we will plot:

 print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
title = "NotMNIST DataSet - Single Hidden Layer - 1024 neurons Activation function: RELU"
label = ['Minibatch Accuracy', 'Validation Accuracy']
draw_plot(x, t, title, label)
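The draw_plot helper is not listed in this excerpt; a minimal matplotlib sketch consistent with the call above (assuming x holds the recorded step indices and t the list of accuracy series) could be:

import matplotlib.pyplot as plt

def draw_plot(x, t, title, label):
    # Plot each accuracy series against the recorded training steps.
    for series, name in zip(t, label):
        plt.plot(x, series, label=name)
    plt.title(title)
    plt.xlabel('Step')
    plt.ylabel('Accuracy (%)')
    plt.legend(loc='lower right')
    plt.show()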

Let's look at the plot generated by the preceding code:

Minibatch accuracy reaches 100% by around iteration 8, while validation accuracy plateaus at about 60%.