Logistic regression done with TensorFlow

Time: 2023-09-18

TensorFlow is an open-source machine learning framework developed by the Google Brain team and released in 2015. It is widely used in image and speech recognition, natural language processing, recommendation systems, and other fields.

At the heart of TensorFlow is the data flow graph used for computation: nodes represent mathematical operations, and edges represent the tensors (multi-dimensional arrays) that flow between them. Expressing both operations and data in a single graph lets TensorFlow optimize complex mathematical models and support distributed computation.
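
As a concrete illustration, here is a minimal sketch in the TensorFlow 1.x style used throughout this post: two constant nodes feed an add node, and the graph only actually executes inside a session.

import tensorflow as tf

a = tf.constant(3.0, name='a')   # node: constant op; its output edge carries a scalar tensor
b = tf.constant(4.0, name='b')
c = tf.add(a, b, name='c')       # node: add op; consumes the two tensors above as input edges

with tf.Session() as sess:       # building the graph does no arithmetic; the session runs it
    print(sess.run(c))           # 7.0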

TensorFlow provides APIs for multiple programming languages, including Python, C++, Java, and Go, which makes it easier for developers to build and train deep learning models. It also ships with a rich set of tools and libraries, such as the TensorBoard visualization tool, TensorFlow Serving for serving models in production, and the high-level Keras API.

Many well-known model families, such as convolutional neural networks, recurrent neural networks, and generative adversarial networks, have been built with TensorFlow and have produced excellent results in fields such as image recognition, speech recognition, and natural language processing.

In addition to the open-source framework itself, Google also offers Google Cloud ML, a cloud-based machine learning platform built on TensorFlow that makes it more convenient to train and deploy machine learning models.

Logistic regression is the most common baseline model for classification problems: it is simple and interpretable, which makes it very popular. Let's use TensorFlow to build this model.

1. Environment setup

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # suppress TensorFlow's C++ info/warning log messages

import warnings
warnings.filterwarnings("ignore")          # silence Python warnings

import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data   # TF 1.x helper for downloading/reading MNIST
import time

2. Data reading

# Load the MNIST handwritten digit dataset using the helper that ships with TensorFlow
mnist = input_data.read_data_sets('./data/mnist', one_hot=True)
Extracting ./data/mnist/train-images-idx3-ubyte.gz
Extracting ./data/mnist/train-labels-idx1-ubyte.gz
Extracting ./data/mnist/t10k-images-idx3-ubyte.gz
Extracting ./data/mnist/t10k-labels-idx1-ubyte.gz
# Check the shape of the training images
mnist.train.images.shape
(55000, 784)
# Check the shape of the training labels
mnist.train.labels.shape
(55000, 10)
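
To get a feel for what one example looks like, here is a small sketch (index 0 is an arbitrary choice; numpy is already imported above as np):

# Each image is a flattened 28x28 = 784 vector of pixel intensities in [0, 1],
# and each label is a one-hot vector of length 10.
img = mnist.train.images[0].reshape(28, 28)
label = mnist.train.labels[0]
print('digit:', np.argmax(label))                # position of the 1 in the one-hot vector
print('non-zero pixels:', int((img > 0).sum()))  # how many pixels are lit up in this image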

3. Prepare the placeholders

batch_size = 128
X = tf.placeholder(tf.float32, [batch_size, 784], name='X_placeholder')   # flattened 28x28 images
Y = tf.placeholder(tf.int32, [batch_size, 10], name='Y_placeholder')      # one-hot digit labels

4. Prepare parameters/weights

w = tf.Variable(tf.random_normal(shape=[784, 10], stddev=0.01), name='weights')   # small random initial weights
b = tf.Variable(tf.zeros([1, 10]), name="bias")                                   # bias starts at zero
logits = tf.matmul(X, w) + b                                                       # raw (unnormalized) class scores
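
The shapes line up as follows: X is (batch_size, 784), w is (784, 10), and the (1, 10) bias broadcasts across the batch, so logits has shape (batch_size, 10), one raw score per class for each image. A quick sanity-check sketch:

print(X.shape)        # (128, 784): one flattened image per row
print(w.shape)        # (784, 10): one column of weights per digit class
print(logits.shape)   # (128, 10): one raw score per class for each image in the batch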

5. Compute the multi-class softmax loss

# Cross-entropy loss for each example in the batch
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
# Average over the batch
loss = tf.reduce_mean(entropy)
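
tf.nn.softmax_cross_entropy_with_logits fuses the softmax and the cross-entropy into a single numerically stable op. Conceptually, for one example with logits z and one-hot label y it computes -sum_i y_i * log(softmax(z)_i); here is a naive NumPy sketch of that formula (illustration only, not how TensorFlow implements it internally):

def softmax_cross_entropy(logits_row, one_hot_row):
    # Shift by the max logit for numerical stability, apply softmax, then cross-entropy.
    shifted = logits_row - logits_row.max()
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return -np.sum(one_hot_row * np.log(probs))

# Example: the true class is 2 and the model already gives it the largest logit,
# so the loss is relatively small.
print(softmax_cross_entropy(np.array([0.1, 0.2, 1.5]), np.array([0.0, 0.0, 1.0])))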

6. Prepare the optimizer

Training proceeds by stochastic (mini-batch) gradient descent on this loss; here we use the Adam optimizer, but other optimizers such as plain gradient descent would also work (see the sketch after the code below).

learning_rate = 0.01
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
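
As mentioned above, Adam is just one choice; to train with plain stochastic gradient descent instead, only this line changes and everything else stays the same (a sketch):

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)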

7. Run the operations defined in the graph inside a session

# Total number of training epochs
n_epochs = 30

with tf.Session() as sess:
    # Write the graph to a log directory so its structure can be viewed in TensorBoard
    writer = tf.summary.FileWriter('../graphs/logistic_reg', sess.graph)

    start_time = time.time()
    sess.run(tf.global_variables_initializer())
    n_batches = int(mnist.train.num_examples/batch_size)
    for i in range(n_epochs): # iterate this many rounds
        total_loss = 0
        for _ in range(n_batches):
            X_batch, Y_batch = mnist.train.next_batch(batch_size)
            _, loss_batch = sess.run([optimizer, loss], feed_dict={X: X_batch, Y:Y_batch}) 
            total_loss += loss_batch
        print('Average loss epoch {0}: {1}'.format(i, total_loss/n_batches))
    print('Total time: {0} seconds'.format(time.time() - start_time))
    print('Optimization Finished!')

    # Test the model
    preds = tf.nn.softmax(logits)
    correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32))
    
    n_batches = int(mnist.test.num_examples/batch_size)
    total_correct_preds = 0
    
    for i in range(n_batches):
        X_batch, Y_batch = mnist.test.next_batch(batch_size)
        accuracy_batch = sess.run([accuracy], feed_dict={X: X_batch, Y:Y_batch}) 
        total_correct_preds += accuracy_batch[0]
        
    print('Accuracy {0}'.format(total_correct_preds/mnist.test.num_examples))

    writer.close()
Average loss epoch 0: 0.36748782022571785
Average loss epoch 1: 0.2978815356126198
Average loss epoch 2: 0.27840628396797845
Average loss epoch 3: 0.2783186247437706
Average loss epoch 4: 0.2783641471138923
Average loss epoch 5: 0.2750668214473413
Average loss epoch 6: 0.2687560408126502
Average loss epoch 7: 0.2713795114126239
Average loss epoch 8: 0.2657588795522154
Average loss epoch 9: 0.26322007090686916
Average loss epoch 10: 0.26289192279735646
Average loss epoch 11: 0.26248606019989873
Average loss epoch 12: 0.2604622903056356
Average loss epoch 13: 0.26015280702939403
Average loss epoch 14: 0.2581879366319496
Average loss epoch 15: 0.2590309207117085
Average loss epoch 16: 0.2630510463581219
Average loss epoch 17: 0.25501730025578767
Average loss epoch 18: 0.2547102673000945
Average loss epoch 19: 0.258298404375851
Average loss epoch 20: 0.2549241428330784
Average loss epoch 21: 0.2546788509283866
Average loss epoch 22: 0.259556887067837
Average loss epoch 23: 0.25428259843365575
Average loss epoch 24: 0.25442713139565676
Average loss epoch 25: 0.2553852511383159
Average loss epoch 26: 0.2503043229415978
Average loss epoch 27: 0.25468004046828596
Average loss epoch 28: 0.2552785321479633
Average loss epoch 29: 0.2506257003663859
Total time: 28.603315353393555 seconds
Optimization Finished!
Accuracy 0.9187
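
One practical note, stated as an assumption about your environment rather than part of the original recipe: the code above targets the TensorFlow 1.x API (placeholders, sessions, tf.train optimizers). Under TensorFlow 2.x these symbols live in the compatibility module, and the tensorflow.examples.tutorials.mnist reader has been removed, so the data-loading step would need a replacement such as tf.keras.datasets.mnist. A minimal sketch of the import change:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()   # restores 1.x graph/session behaviour when running on TF 2.x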

P.S. Series of articles

1. Boston Home Price Forecast: https://want595.blog.csdn.net/article/details/132181950
2. Analysis of the Iris dataset: https://want595.blog.csdn.net/article/details/132182057
3. Feature processing: https://want595.blog.csdn.net/article/details/132182165
4. Cross-validation: https://want595.blog.csdn.net/article/details/132182238
5. Example of constructing a neural network: https://want595.blog.csdn.net/article/details/132182341
6. Complete linear regression using TensorFlow: https://want595.blog.csdn.net/article/details/132182417
7. Logistic regression done with TensorFlow: https://want595.blog.csdn.net/article/details/132182496
8. TensorBoard case: https://want595.blog.csdn.net/article/details/132182584
9. Complete linear regression with Keras: https://want595.blog.csdn.net/article/details/132182723
10. Logistic regression using Keras: https://want595.blog.csdn.net/article/details/132182795
11. Cat and dog recognition using Keras pre-trained models: https://want595.blog.csdn.net/article/details/132243928
12. Training models with PyTorch: https://want595.blog.csdn.net/article/details/132243989
13. Suppressing overfitting with Dropout: https://want595.blog.csdn.net/article/details/132244111
14. MNIST handwriting recognition using CNN (TensorFlow): https://want595.blog.csdn.net/article/details/132244499
15. MNIST handwriting recognition using CNN (Keras): https://want595.blog.csdn.net/article/details/132244552
16. MNIST handwriting recognition using CNN (PyTorch): https://want595.blog.csdn.net/article/details/132244641
17. Generating handwritten digit samples using GAN: https://want595.blog.csdn.net/article/details/132244764
18. Natural language processing (NLP): https://want595.blog.csdn.net/article/details/132276591
