
Theano logistic regression dimension mismatch

I have the following code to do logistic regression in Theano, but I keep getting a dimension mismatch error:

import numpy as np
import theano
import theano.tensor as T

inputs = [[0,0], [1,1], [0,1], [1,0]]
outputs = [0, 1, 0, 0]

x = T.dmatrix("x")
y = T.dvector("y")
b = theano.shared(value=1.0, name='b')

alpha = 0.01
training_steps = 30000

# weights: one per input feature, initialised uniformly in [-1, 1)
w_values = np.asarray(np.random.uniform(low=-1, high=1, size=(2, 1)), dtype=theano.config.floatX)
w = theano.shared(value=w_values, name='w', borrow=True)

hypothesis = T.nnet.sigmoid(T.dot(x, w) + b)
cost = T.sum((y - hypothesis) ** 2)
updates = [
    (w, w - alpha * T.grad(cost, wrt=w)),
    (b, b - alpha * T.grad(cost, wrt=b))
]

train = theano.function(inputs=[x, y], outputs=[hypothesis, cost], updates=updates)
test = theano.function(inputs=[x], outputs=[hypothesis])

# Training
cost_history = []

for i in range(training_steps):
    if (i+1) % 5000 == 0:
        print "Iteration #%s: " % str(i+1)
        print "Cost: %s" % str(cost)
    h, cost = train(inputs, outputs)
    cost_history.append(cost)


The error Theano gives is:

Input dimension mis-match. (input[0].shape[1] = 4, input[1].shape[1] = 1)
Apply node that caused the error: Elemwise{sub,no_inplace}(InplaceDimShuffle{x,0}.0, Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)].0)
Toposort index: 7
Inputs types: [TensorType(float64, row), TensorType(float64, matrix)]
Inputs shapes: [(1L, 4L), (4L, 1L)]
Inputs strides: [(32L, 8L), (8L, 8L)]
Inputs values: [array([[ 0., 1., 0., 0.]]), array([[ 0.73105858],
[ 0.70988924],
[ 0.68095791],
[ 0.75706749]])]


So the problem, it seems, is that y is being treated as a 1x4 row, whereas hypothesis is a 4x1 column, so the elementwise subtraction in the cost can't be computed.
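To check my understanding, the same shapes in plain NumPy show why this combination is a problem: broadcasting a (4,) vector against a (4, 1) column produces a 4x4 matrix, so the squared-error sum would silently run over 16 pairs instead of 4. Theano refuses to broadcast here because the second dimension of the (4, 1) operand is not flagged broadcastable, hence the mismatch error. A minimal sketch:

import numpy as np

y = np.array([0., 1., 0., 0.])   # shape (4,): what the dvector holds
h = np.zeros((4, 1))             # shape (4, 1): what T.dot(x, w) + b produces

# NumPy aligns (4,) with (4, 1) as (1, 4) vs (4, 1) and broadcasts to (4, 4)
print (y - h).shape              # (4, 4)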

I've tried reshaping the outputs to a 4x1 with:

outputs = np.array([0, 1, 0, 0]).reshape(4,1)


Which then gives me another dimension-related error:

('Bad input argument to theano function with name "F:/test.py:32" at index 1(0-based)', 'Wrong number of dimensions: expected 1, got 2 with shape (4L, 1L).')
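As far as I can tell, this second error comes from theano.function validating its inputs: y is still declared as T.dvector, which is fixed at one dimension, so a (4, 1) array is rejected before the graph even runs. A quick check, independent of the rest of my graph, reproduces it:

import numpy as np
import theano
import theano.tensor as T

y = T.dvector("y")                            # declared ndim is fixed at 1
f = theano.function([y], y * 2)

f(np.array([0., 1., 0., 0.]))                 # fine: 1-D input
f(np.array([0., 1., 0., 0.]).reshape(4, 1))   # raises "Wrong number of dimensions: expected 1, got 2"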

Answer

In your code, hypothesis is a matrix of shape n_samples x 1 (the result of T.dot(x, w) with w of shape (2, 1)), while y is a vector, so the two operands of the subtraction never line up. You can either flatten hypothesis to a vector or reshape y to a matrix. Note that reshaping the NumPy outputs alone is not enough: y itself must also be declared as T.dmatrix instead of T.dvector, which is why your reshape attempt failed. The following code takes the reshape route and works.

import numpy as np
import theano
import theano.tensor as T

inputs = [[0,0], [1,1], [0,1], [1,0]]
outputs = [0, 1, 0, 0]
# make the targets an (n_samples, 1) matrix so they line up with hypothesis
outputs = np.asarray(outputs, dtype='int32').reshape((len(outputs), 1))

x = T.dmatrix("x")
# y = T.dvector("y")
y = T.dmatrix("y")    # must be a matrix now, to match the reshaped outputs
b = theano.shared(value=1.0, name='b')

alpha = 0.01
training_steps = 30000

w_values = np.asarray(np.random.uniform(low=-1, high=1, size=(2, 1)), dtype=theano.config.floatX)
w = theano.shared(value=w_values, name='w', borrow=True)

hypothesis = T.nnet.sigmoid(T.dot(x, w) + b)    # shape (n_samples, 1)
# hypothesis = T.flatten(hypothesis)            # the alternative fix: flatten to a vector
cost = T.sum((y - hypothesis) ** 2)
updates = [
    (w, w - alpha * T.grad(cost, wrt=w)),
    (b, b - alpha * T.grad(cost, wrt=b))
]

train = theano.function(inputs=[x, y], outputs=[hypothesis, cost], updates=updates)
test = theano.function(inputs=[x], outputs=[hypothesis])

# Training
cost_history = []

for i in range(training_steps):
    if (i+1) % 5000 == 0:
        print "Iteration #%s: " % str(i+1)
        print "Cost: %s" % str(cost)
    h, cost = train(inputs, outputs)
    cost_history.append(cost)
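Alternatively, to take the flatten route mentioned above instead, keep y as a dvector, leave outputs as a plain list, and flatten hypothesis; the rest of the code is unchanged:

outputs = [0, 1, 0, 0]        # plain 1-D targets are fine for a dvector
y = T.dvector("y")

hypothesis = T.flatten(T.nnet.sigmoid(T.dot(x, w) + b))   # (n_samples, 1) -> (n_samples,)
cost = T.sum((y - hypothesis) ** 2)                       # both operands are now 1-D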