Fredrik Viberg - 1 year ago 176
Python Question

logistic regression debugging tensorflow

I am trying to learn TensorFlow and am currently trying to build a simple logistic regression model. Here is the code I stitched together from different examples I could find.

with tf.Session() as sess:
    # Training data
    input = tf.constant(tra)
    target = tf.constant(np.transpose(data[:,1]).astype(np.float64))

    # Set model weights
    W = tf.Variable(np.random.randn(10, 1).astype(np.float64))

    # Construct model: linear part, then sigmoid
    mat = tf.matmul(input, W)
    pred = tf.sigmoid(mat)

    # Compute the error
    yerror = tf.sub(pred, target)
    # We are going to minimize the L2 loss. The L2 loss is the sum of the
    # squared error for all our estimates of y. This penalizes large errors
    # a lot, but small errors only a little.
    loss = tf.nn.l2_loss(yerror)

    # Gradient Descent
    update_weights = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

    # Initializing the variables
    tf.initialize_all_variables().run()

    for _ in range(50):
        # Repeatedly run the operations, updating the TensorFlow variable.
        sess.run(update_weights)

So the code runs, but the error does not improve after each iteration, and I have tried different step sizes.

I wonder if the setup is correct?

I am a bit unsure how to debug it, since all the calculation happens inside the run command. The training data is fine. Could someone point out what I am doing wrong in this session setup, or suggest how I can debug it?
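One way to sanity-check a setup like this, without fighting TensorFlow's deferred execution, is to replay the same computation (sigmoid of a matmul, L2 loss, gradient descent) in plain NumPy on a small synthetic problem and watch the loss. The data below is made up for illustration, not the asker's tra/data arrays:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
m, n = 100, 10
X = rng.randn(m, n)                                     # stand-in for `tra`
true_w = rng.randn(n, 1)
y = (X.dot(true_w) > 0).astype(np.float64)              # m x 1 targets

W = rng.randn(n, 1)                                     # same init as the TF code
lr = 0.01
losses = []
for _ in range(50):
    pred = sigmoid(X.dot(W))                            # tf.sigmoid(tf.matmul(input, W))
    err = pred - y                                      # tf.sub(pred, target)
    losses.append(0.5 * np.sum(err ** 2))               # tf.nn.l2_loss(yerror)
    # gradient of the L2 loss through the sigmoid
    grad = X.T.dot(err * pred * (1.0 - pred))
    W -= lr * grad

print(losses[0], losses[-1])
```

If the loss drops here but not in the TensorFlow version with the same data, the bug is in how the graph is wired (often the tensor shapes), not in the math.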

Help much appreciated.

Answer Source

Okay, so I did some testing and found that the problem was the dimensions of the target variable. I had to specify that it is an m x 1 matrix (where m is the number of training examples); this is done by passing the shape to the constant:

target = tf.constant(np.transpose(data[:,1]).astype(np.float64), shape=[m,1])
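The shape matters because of broadcasting: subtracting a length-m vector from an m x 1 column silently produces an m x m matrix, so the loss is computed over m squared "errors" instead of m. The same rule applies in plain NumPy, which makes it easy to demonstrate:

```python
import numpy as np

m = 5
pred = np.zeros((m, 1))                           # model output: m x 1
target_flat = np.arange(m, dtype=np.float64)      # shape (m,)  -- the buggy version
target_col = target_flat.reshape(m, 1)            # shape (m, 1) -- the fixed version

print((pred - target_flat).shape)  # (5, 5): silent broadcast, wrong loss
print((pred - target_col).shape)   # (5, 1): what we actually want
```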

Also, gradient descent did not do well until I normalized the features.
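A common way to do that normalization (the exact scheme the answer used is not shown, so this is one standard choice) is to rescale each feature column to zero mean and unit variance before building the graph:

```python
import numpy as np

def normalize(X):
    # zero-mean, unit-variance per feature column
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)  # guard constant columns
    return (X - mu) / sigma, mu, sigma

# made-up features with wildly different scales
rng = np.random.RandomState(0)
X = rng.randn(20, 3) * np.array([1.0, 100.0, 0.01]) + np.array([0.0, 50.0, -3.0])
Xn, mu, sigma = normalize(X)
```

Keep mu and sigma around: any new data fed to the trained model has to be scaled with the same statistics.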
