Theano: A Multi-Layer Perceptron Model

The architecture we implement with Theano in this section is a multi-layer perceptron (MLP) with a single hidden layer. An MLP can be viewed as a logistic regressor in which the input is first transformed by a learned non-linear mapping Φ, which projects the data into a space where it becomes linearly separable. The intermediate layer of an MLP is called the hidden layer. A single hidden layer is already enough to make the MLP a universal approximator; however, we will see later that there are substantial benefits to using many hidden layers, which is the premise of deep learning.

(This section mainly covers the implementation of the MLP and gives little background on neural networks; interested readers can consult a dedicated neural-network tutorial for the theory. - Translator's note)

The MLP Model

Graphically, an MLP with one hidden layer consists of an input layer fully connected to a hidden layer, which is in turn fully connected to the output layer.

A single-hidden-layer MLP defines a mapping

f : R^D → R^L,

where D and L are the sizes of the input vector x and of the output vector f(x), respectively. In matrix notation, f(x) is given by

f(x) = G(b^{(2)} + W^{(2)}(s(b^{(1)} + W^{(1)} x)))

where b^{(1)} and b^{(2)} are bias vectors, W^{(1)} and W^{(2)} are weight matrices, and G and s are activation functions.

The vector h(x) = Φ(x) = s(b^{(1)} + W^{(1)} x) constitutes the hidden layer. W^{(1)} ∈ R^{D×D_h} is the weight matrix connecting the input vector to the hidden layer; each column holds the weights from the input units to one hidden unit. Typical choices for s are the hyperbolic tangent, tanh(a) = (e^a − e^{−a}) / (e^a + e^{−a}), or the logistic sigmoid function, sigmoid(a) = 1 / (1 + e^{−a}).

The output vector of the model is o(x) = G(b^{(2)} + W^{(2)} h(x)). The reader will recognize the form used in the previous section: as before, taking G to be the softmax function makes the outputs interpretable as class-membership probabilities.
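To make the formula concrete, here is a minimal NumPy sketch of the forward pass f(x) alone. It is not part of the tutorial code; the sizes, the random parameters, and the small softmax helper are made up for the example, but the shapes follow the same (n_in, n_out) weight convention used below.

    import numpy as np

    rng = np.random.RandomState(0)
    D, D_h, L = 4, 3, 2                        # input, hidden, and output sizes (arbitrary)

    W1, b1 = rng.randn(D, D_h), np.zeros(D_h)  # hidden-layer parameters
    W2, b2 = rng.randn(D_h, L), np.zeros(L)    # output-layer parameters

    def softmax(a):
        e = np.exp(a - a.max())
        return e / e.sum()

    x = rng.randn(D)                           # a single input vector
    h = np.tanh(b1 + x.dot(W1))                # h(x) = s(b1 + W1 x), with s = tanh
    o = softmax(b2 + h.dot(W2))                # o(x) = G(b2 + W2 h(x)), with G = softmax
    print(o)                                   # class probabilities; they sum to 1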

To train the MLP we learn all of its parameters, θ = {W^{(2)}, b^{(2)}, W^{(1)}, b^{(1)}}, with stochastic gradient descent. The gradients ∂ℓ/∂θ can be computed with the backpropagation algorithm. Fortunately, Theano computes these derivatives automatically, so here we do not need to worry about the details.
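For readers unfamiliar with Theano's automatic differentiation, here is a minimal, self-contained toy example of T.grad, separate from the MLP code:

    import theano
    import theano.tensor as T

    x = T.dscalar('x')
    y = x ** 2                       # a toy symbolic expression
    gy = T.grad(y, x)                # symbolic gradient dy/dx = 2x
    f = theano.function([x], gy)     # compile it into a callable
    print(f(3.0))                    # prints 6.0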

From Logistic Regression to a Multi-Layer Perceptron

In this section we focus on the single-hidden-layer MLP. We start by implementing a class that represents a hidden layer; to build the full MLP we then only need to stack a logistic regression layer on top of it.

class HiddenLayer(object):
    def __init__(self, rng, input, n_in, n_out, activation=T.tanh):
        """
        Typical hidden layer of a MLP: units are fully-connected and have
        sigmoidal activation function. Weight matrix W is of shape (n_in, n_out)
        and the bias vector b is of shape (n_out,).

        NOTE : The nonlinearity used here is tanh

        Hidden unit activation is given by: tanh(dot(input, W) + b)

        :type rng: numpy.random.RandomState
        :param rng: a random number generator used to initialize weights

        :type input: theano.tensor.dmatrix
        :param input: a symbolic tensor of shape (n_examples, n_in)

        :type n_in: int
        :param n_in: dimensionality of input

        :type n_out: int
        :param n_out: number of hidden units

        :type activation: theano.Op or function
        :param activation: Non linearity to be applied in the hidden
                           layer
        """
        self.input = input

The initial values of the hidden-layer weights should be sampled uniformly from a symmetric interval that depends on the activation function. For tanh, the interval should be [−sqrt(6 / (fan_in + fan_out)), sqrt(6 / (fan_in + fan_out))] [Xavier10], where fan_in and fan_out are the numbers of units in the (i−1)-th and i-th layers, respectively. For the sigmoid function, the interval is [−4·sqrt(6 / (fan_in + fan_out)), 4·sqrt(6 / (fan_in + fan_out))]. This initialization ensures that, early in training, each neuron operates in the regime of its activation function where information is easily propagated both forward and backward.
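As a quick worked example, using the MNIST-sized dimensions that appear later in this tutorial (784 inputs, 500 hidden units), the tanh sampling bound comes out to roughly 0.068, and a sigmoid layer would scale it by 4:

    import numpy

    fan_in, fan_out = 28 * 28, 500                 # dimensions used later in test_mlp
    bound = numpy.sqrt(6. / (fan_in + fan_out))    # ~0.068: tanh sampling bound
    sigmoid_bound = 4 * bound                      # ~0.27: sigmoid sampling bound

The tutorial's actual initialization of W and b inside HiddenLayer reads: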

W_values = numpy.asarray(rng.uniform(
        low=-numpy.sqrt(6. / (n_in + n_out)),
        high=numpy.sqrt(6. / (n_in + n_out)),
        size=(n_in, n_out)), dtype=theano.config.floatX)
if activation == theano.tensor.nnet.sigmoid:
    W_values *= 4

self.W = theano.shared(value=W_values, name='W')

b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, name='b')

Note that the hidden layer's activation is a non-linear function. It defaults to tanh, but in many cases we may want to use a different non-linearity. The hidden-layer output is then computed as:

self.output = activation(T.dot(input, self.W) + self.b)

# parameters of the model
self.params = [self.W, self.b]

Tying this back to the theory, this computes exactly the hidden-layer output h(x) = Φ(x) = s(b^{(1)} + W^{(1)} x). If this value is used as the input of the LogisticRegression class from the previous section, its output is precisely the output of the MLP. A simple implementation of the MLP class therefore looks as follows:

class MLP(object):
    """Multi-Layer Perceptron Class

    A multilayer perceptron is a feedforward artificial neural network model
    that has one layer or more of hidden units and nonlinear activations.
    Intermediate layers usually have as activation function tanh or the
    sigmoid function (defined here by a ``HiddenLayer`` class) while the
    top layer is a softmax layer (defined here by a ``LogisticRegression``
    class).
    """

    def __init__(self, rng, input, n_in, n_hidden, n_out):
        """Initialize the parameters for the multilayer perceptron

        :type rng: numpy.random.RandomState
        :param rng: a random number generator used to initialize weights

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
        architecture (one minibatch)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
        which the datapoints lie

        :type n_hidden: int
        :param n_hidden: number of hidden units

        :type n_out: int
        :param n_out: number of output units, the dimension of the space in
        which the labels lie

        """

        # Since we are dealing with a one hidden layer MLP, this will
        # translate into a HiddenLayer connected to the LogisticRegression
        # layer
        self.hiddenLayer = HiddenLayer(rng=rng, input=input,
                                       n_in=n_in, n_out=n_hidden,
                                       activation=T.tanh)

        # The logistic regression layer gets as input the hidden units
        # of the hidden layer
        self.logRegressionLayer = LogisticRegression(
            input=self.hiddenLayer.output,
            n_in=n_hidden,
            n_out=n_out)

In this tutorial we again use L1 and L2 regularization, so we need to compute the L1 norm and the squared L2 norm of the weight matrices of both layers.
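Written out, and matching the cost expression used later in the code (where L1_reg and L2_reg play the roles of λ1 and λ2), the regularized training criterion is

cost(θ) = NLL(θ) + λ1 (|W^{(1)}|_1 + |W^{(2)}|_1) + λ2 (||W^{(1)}||_2^2 + ||W^{(2)}||_2^2)

In the MLP constructor the two norm terms are built as: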

# L1 norm ; one regularization option is to enforce L1 norm to
# be small
self.L1 = abs(self.hiddenLayer.W).sum() \
        + abs(self.logRegressionLayer.W).sum()

# square of L2 norm ; one regularization option is to enforce
# square of L2 norm to be small
self.L2_sqr = (self.hiddenLayer.W ** 2).sum() \
            + (self.logRegressionLayer.W ** 2).sum()

# negative log likelihood of the MLP is given by the negative
# log likelihood of the output of the model, computed in the
# logistic regression layer
self.negative_log_likelihood = self.logRegressionLayer.negative_log_likelihood

# same holds for the function computing the number of errors
self.errors = self.logRegressionLayer.errors

# the parameters of the model are the parameters of the two layers it is
# made out of
self.params = self.hiddenLayer.params + self.logRegressionLayer.params

As before, we train the model with minibatch stochastic gradient descent. The difference is that the cost function now includes the regularization terms. L1_reg and L2_reg are hyper-parameters controlling the weight of the regularization terms in the overall cost. The code computing the new cost is:

# the cost we minimize during training is the negative log likelihood of
# the model plus the regularization terms (L1 and L2); cost is expressed
# here symbolically
cost = classifier.negative_log_likelihood(y) \
     + L1_reg * classifier.L1 \
     + L2_reg * classifier.L2_sqr

Next, the model parameters are updated using the gradients. This code is almost identical to that of the previous section, except for the number of parameters involved.
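For reference, each parameter receives a plain gradient-descent step, θ ← θ − η · ∂cost/∂θ, where η is learning_rate; the loop below builds exactly this list of (parameter, update) pairs.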

# compute the gradient of cost with respect to theta (stored in params)
# the resulting gradients will be stored in a list gparams
gparams = []
for param in classifier.params:
    gparam = T.grad(cost, param)
    gparams.append(gparam)

# specify how to update the parameters of the model as a list of
# (variable, update expression) pairs
updates = []
# given two lists of the same length, A = [a1, a2, a3, a4] and
# B = [b1, b2, b3, b4], zip generates a list C of the same size, where each
# element is a pair formed from the two lists:
#     C = [(a1, b1), (a2, b2), (a3, b3), (a4, b4)]
for param, gparam in zip(classifier.params, gparams):
    updates.append((param, param - learning_rate * gparam))

# compiling a Theano function `train_model` that returns the cost, but
# at the same time updates the parameters of the model based on the rules
# defined in `updates`
train_model = theano.function(inputs=[index], outputs=cost,
        updates=updates,
        givens={
            x: train_set_x[index * batch_size:(index + 1) * batch_size],
            y: train_set_y[index * batch_size:(index + 1) * batch_size]})

Putting It All Together

With these building blocks in place, writing the complete MLP script is straightforward. The code below shows how everything fits together; the overall logic is essentially the same as that of the logistic regression classifier from the previous section.

"""
This tutorial introduces the multilayer perceptron using Theano.

A multilayer perceptron is a logistic regressor where
instead of feeding the input to the logistic regression you insert an
intermediate layer, called the hidden layer, that has a nonlinear
activation function (usually tanh or sigmoid). One can use many such
hidden layers making the architecture deep. The tutorial will also tackle
the problem of MNIST digit classification.

.. math::

f(x) = G( b^{(2)} + W^{(2)}( s( b^{(1)} + W^{(1)} x))),

References:

- textbooks: "Pattern Recognition and Machine Learning" -
             Christopher M. Bishop, section 5

"""
__docformat__ = 'restructedtext en'

import cPickle
import gzip
import os
import sys
import time

import numpy

import theano
import theano.tensor as T

from logistic_sgd import LogisticRegression, load_data

class HiddenLayer(object):
    def __init__(self, rng, input, n_in, n_out, W=None, b=None,
                 activation=T.tanh):
        """
        Typical hidden layer of a MLP: units are fully-connected and have
        sigmoidal activation function. Weight matrix W is of shape (n_in, n_out)
        and the bias vector b is of shape (n_out,).

        NOTE : The nonlinearity used here is tanh

        Hidden unit activation is given by: tanh(dot(input, W) + b)

        :type rng: numpy.random.RandomState
        :param rng: a random number generator used to initialize weights

        :type input: theano.tensor.dmatrix
        :param input: a symbolic tensor of shape (n_examples, n_in)

        :type n_in: int
        :param n_in: dimensionality of input

        :type n_out: int
        :param n_out: number of hidden units

        :type activation: theano.Op or function
        :param activation: Non linearity to be applied in the hidden
                           layer
        """
        self.input = input

        # `W` is initialized with `W_values` which is uniformly sampled
        # from -sqrt(6./(n_in+n_hidden)) to sqrt(6./(n_in+n_hidden))
        # for the tanh activation function
        # the output of uniform is converted using asarray to dtype
        # theano.config.floatX so that the code is runnable on GPU
        # Note : optimal initialization of weights is dependent on the
        #        activation function used (among other things).
        #        For example, results presented in [Xavier10] suggest that you
        #        should use 4 times larger initial weights for sigmoid
        #        compared to tanh
        #        We have no info for other functions, so we use the same as
        #        tanh.
        if W is None:
            W_values = numpy.asarray(rng.uniform(
                    low=-numpy.sqrt(6. / (n_in + n_out)),
                    high=numpy.sqrt(6. / (n_in + n_out)),
                    size=(n_in, n_out)), dtype=theano.config.floatX)
            if activation == theano.tensor.nnet.sigmoid:
                W_values *= 4

            W = theano.shared(value=W_values, name='W', borrow=True)

        if b is None:
            b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)
            b = theano.shared(value=b_values, name='b', borrow=True)

        self.W = W
        self.b = b

        lin_output = T.dot(input, self.W) + self.b
        self.output = (lin_output if activation is None
                       else activation(lin_output))
        # parameters of the model
        self.params = [self.W, self.b]

class MLP(object):
    """Multi-Layer Perceptron Class

    A multilayer perceptron is a feedforward artificial neural network model
    that has one layer or more of hidden units and nonlinear activations.
    Intermediate layers usually have as activation function tanh or the
    sigmoid function (defined here by a ``HiddenLayer`` class) while the
    top layer is a softmax layer (defined here by a ``LogisticRegression``
    class).
    """

    def __init__(self, rng, input, n_in, n_hidden, n_out):
        """Initialize the parameters for the multilayer perceptron

        :type rng: numpy.random.RandomState
        :param rng: a random number generator used to initialize weights

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
        architecture (one minibatch)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
        which the datapoints lie

        :type n_hidden: int
        :param n_hidden: number of hidden units

        :type n_out: int
        :param n_out: number of output units, the dimension of the space in
        which the labels lie

        """

        # Since we are dealing with a one hidden layer MLP, this will
        # translate into a HiddenLayer with a tanh activation connected to
        # the LogisticRegression layer; the activation can be replaced by
        # sigmoid or any other nonlinearity
        self.hiddenLayer = HiddenLayer(rng=rng, input=input,
                                       n_in=n_in, n_out=n_hidden,
                                       activation=T.tanh)

        # The logistic regression layer gets as input the hidden units
        # of the hidden layer
        self.logRegressionLayer = LogisticRegression(
            input=self.hiddenLayer.output,
            n_in=n_hidden,
            n_out=n_out)

        # L1 norm ; one regularization option is to enforce L1 norm to
        # be small
        self.L1 = abs(self.hiddenLayer.W).sum() \
                + abs(self.logRegressionLayer.W).sum()

        # square of L2 norm ; one regularization option is to enforce
        # square of L2 norm to be small
        self.L2_sqr = (self.hiddenLayer.W ** 2).sum() \
                    + (self.logRegressionLayer.W ** 2).sum()

        # negative log likelihood of the MLP is given by the negative
        # log likelihood of the output of the model, computed in the
        # logistic regression layer
        self.negative_log_likelihood = self.logRegressionLayer.negative_log_likelihood
        # same holds for the function computing the number of errors
        self.errors = self.logRegressionLayer.errors

        # the parameters of the model are the parameters of the two layers it is
        # made out of
        self.params = self.hiddenLayer.params + self.logRegressionLayer.params

def test_mlp(learning_rate=0.01, L1_reg=0.00, L2_reg=0.0001, n_epochs=1000,
             dataset='../data/mnist.pkl.gz', batch_size=20, n_hidden=500):
    """
    Demonstrate stochastic gradient descent optimization for a multilayer
    perceptron

    This is demonstrated on MNIST.

    :type learning_rate: float
    :param learning_rate: learning rate used (factor for the stochastic
    gradient)

    :type L1_reg: float
    :param L1_reg: L1-norm's weight when added to the cost (see
    regularization)

    :type L2_reg: float
    :param L2_reg: L2-norm's weight when added to the cost (see
    regularization)

    :type n_epochs: int
    :param n_epochs: maximal number of epochs to run the optimizer

    :type dataset: string
    :param dataset: the path of the MNIST dataset file from
                 http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz

    """
    datasets = load_data(dataset)

    train_set_x, train_set_y = datasets[0]
    valid_set_x, valid_set_y = datasets[1]
    test_set_x, test_set_y = datasets[2]

    # compute number of minibatches for training, validation and testing
    n_train_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size
    n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] / batch_size
    n_test_batches = test_set_x.get_value(borrow=True).shape[0] / batch_size

    ######################
    # BUILD ACTUAL MODEL #
    ######################
    print '... building the model'

    # allocate symbolic variables for the data
    index = T.lscalar()  # index to a [mini]batch
    x = T.matrix('x')  # the data is presented as rasterized images
    y = T.ivector('y')  # the labels are presented as 1D vector of
                        # [int] labels

    rng = numpy.random.RandomState(1234)

    # construct the MLP class
    classifier = MLP(rng=rng, input=x, n_in=28 * 28,
                     n_hidden=n_hidden, n_out=10)

    # the cost we minimize during training is the negative log likelihood of
    # the model plus the regularization terms (L1 and L2); cost is expressed
    # here symbolically
    cost = classifier.negative_log_likelihood(y) \
         + L1_reg * classifier.L1 \
         + L2_reg * classifier.L2_sqr

    # compiling a Theano function that computes the mistakes that are made
    # by the model on a minibatch
    test_model = theano.function(inputs=[index],
            outputs=classifier.errors(y),
            givens={
                x: test_set_x[index * batch_size:(index + 1) * batch_size],
                y: test_set_y[index * batch_size:(index + 1) * batch_size]})

    validate_model = theano.function(inputs=[index],
            outputs=classifier.errors(y),
            givens={
                x: valid_set_x[index * batch_size:(index + 1) * batch_size],
                y: valid_set_y[index * batch_size:(index + 1) * batch_size]})

    # compute the gradient of cost with respect to theta (stored in params)
    # the resulting gradients will be stored in a list gparams
    gparams = []
    for param in classifier.params:
        gparam = T.grad(cost, param)
        gparams.append(gparam)

    # specify how to update the parameters of the model as a list of
    # (variable, update expression) pairs
    updates = []
    # given two lists of the same length, A = [a1, a2, a3, a4] and
    # B = [b1, b2, b3, b4], zip generates a list C of the same size, where
    # each element is a pair formed from the two lists :
    #    C = [(a1, b1), (a2, b2), (a3, b3), (a4, b4)]
    for param, gparam in zip(classifier.params, gparams):
        updates.append((param, param - learning_rate * gparam))

    # compiling a Theano function `train_model` that returns the cost, but
    # at the same time updates the parameters of the model based on the rules
    # defined in `updates`
    train_model = theano.function(inputs=[index], outputs=cost,
            updates=updates,
            givens={
                x: train_set_x[index * batch_size:(index + 1) * batch_size],
                y: train_set_y[index * batch_size:(index + 1) * batch_size]})

    ###############
    # TRAIN MODEL #
    ###############
    print '... training'

    # early-stopping parameters
    patience = 10000  # look at this many examples regardless
    patience_increase = 2  # wait this much longer when a new best is
                           # found
    improvement_threshold = 0.995  # a relative improvement of this much is
                                   # considered significant
    validation_frequency = min(n_train_batches, patience / 2)
                                  # go through this many
                                  # minibatches before checking the network
                                  # on the validation set; in this case we
                                  # check every epoch

    best_params = None
    best_validation_loss = numpy.inf
    best_iter = 0
    test_score = 0.
    start_time = time.clock()

    epoch = 0
    done_looping = False

    while (epoch < n_epochs) and (not done_looping):
        epoch = epoch + 1
        for minibatch_index in xrange(n_train_batches):

            minibatch_avg_cost = train_model(minibatch_index)
            # iteration number
            iter = (epoch - 1) * n_train_batches + minibatch_index

            if (iter + 1) % validation_frequency == 0:
                # compute zero-one loss on validation set
                validation_losses = [validate_model(i) for i
                                     in xrange(n_valid_batches)]
                this_validation_loss = numpy.mean(validation_losses)

                print('epoch %i, minibatch %i/%i, validation error %f %%' %
                     (epoch, minibatch_index + 1, n_train_batches,
                      this_validation_loss * 100.))

                # if we got the best validation score until now
                if this_validation_loss < best_validation_loss:
                    # improve patience if loss improvement is good enough
                    if this_validation_loss < best_validation_loss * \
                           improvement_threshold:
                        patience = max(patience, iter * patience_increase)

                    best_validation_loss = this_validation_loss
                    best_iter = iter

                    # test it on the test set
                    test_losses = [test_model(i) for i
                                   in xrange(n_test_batches)]
                    test_score = numpy.mean(test_losses)

                    print(('     epoch %i, minibatch %i/%i, test error of '
                           'best model %f %%') %
                          (epoch, minibatch_index + 1, n_train_batches,
                           test_score * 100.))

            if patience <= iter:
                done_looping = True
                break

    end_time = time.clock()
    print(('Optimization complete. Best validation score of %f %% '
           'obtained at iteration %i, with test performance %f %%') %
          (best_validation_loss * 100., best_iter + 1, test_score * 100.))
    print >> sys.stderr, ('The code for file ' +
                          os.path.split(__file__)[1] +
                          ' ran for %.2fm' % ((end_time - start_time) / 60.))


if __name__ == '__main__':
    test_mlp()
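If the listing above is saved as mlp.py next to the logistic_sgd.py module from the previous section, the experiment can also be driven from another script or an interactive session. This is a minimal sketch with illustrative hyper-parameter values, assuming the MNIST pickle file sits at the relative path used above:

    from mlp import test_mlp

    # run a shorter experiment than the default 1000 epochs (illustrative values)
    test_mlp(learning_rate=0.01, L1_reg=0.00, L2_reg=0.0001,
             n_epochs=10, dataset='../data/mnist.pkl.gz',
             batch_size=20, n_hidden=500)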
