I am trying to learn TensorFlow and am currently trying to build a simple logistic regression model. Here is my code, stitched together from different examples I could find.
with tf.Session() as sess:
    # Training data
    input = tf.constant(tra)
    target = tf.constant(np.transpose(data[:, 1]).astype(np.float64))
    # Set model weights
    W = tf.Variable(np.random.randn(10, 1).astype(np.float64))
    # Construct model: linear combination of the features, squashed by a sigmoid
    mat = tf.matmul(input, W)
    pred = tf.sigmoid(mat)
    # Compute the error
    yerror = tf.sub(pred, target)
    # We are going to minimize the L2 loss. The L2 loss is the sum of the
    # squared error for all our estimates of y. This penalizes large errors
    # a lot, but small errors only a little.
    loss = tf.nn.l2_loss(yerror)
    # Gradient Descent
    update_weights = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
    # Initializing the variables
    sess.run(tf.initialize_all_variables())
    for _ in range(50):
        # Repeatedly run the update op, adjusting the TensorFlow variable W.
        sess.run(update_weights)
Okay, so I did some testing and found that the problem was the dimensions of the target variable. I had to specify that it was an m x 1 matrix (where m is the number of training examples), which is done by passing the shape to the constant:
target = tf.constant(np.transpose(data[:,1]).astype(np.float64), shape=[m,1])
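To see why the shape matters: slicing `data[:, 1]` gives a 1-D array of shape (m,), and `np.transpose` on a 1-D array is a no-op. Subtracting a (m,) target from a (m, 1) prediction then broadcasts to an m x m matrix, so the L2 loss is silently computed over m*m numbers. A small NumPy sketch with made-up values illustrates this:

```python
import numpy as np

m = 4
pred = np.zeros((m, 1))    # model output: m x 1
target_1d = np.zeros(m)    # 1-D target: shape (m,)

# Broadcasting turns the element-wise difference into an m x m matrix.
print((pred - target_1d).shape)  # (4, 4)

# Giving the target an explicit m x 1 shape keeps the difference m x 1.
target_2d = target_1d.reshape(m, 1)
print((pred - target_2d).shape)  # (4, 1)
```

TensorFlow follows the same broadcasting rules as NumPy here, which is why forcing shape=[m, 1] on the constant fixes the loss.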
Also, gradient descent did not do well until I normalized the features.