Example of constructing a neural network

Time: 2023-9-19

Neural network

A neural network is a machine learning model inspired by the nervous system of the human brain. It consists of many neurons (nodes) connected by weights into a network structure, which learns and extracts features from input data and is used for tasks such as classification, regression, and clustering.
Note: This code trains a neural network to fit y = x^2 - 0.5 + noise. The network has one neuron in the input layer, ten neurons in the hidden layer, and one neuron in the output layer.

1. Import of relevant libraries

# Import relevant libraries
import tensorflow as tf # used to construct neural networks
import numpy as np # module for constructing data structures and processing data

This code uses two Python modules:

  1. tensorflow: This is Google’s open-source machine learning framework for constructing neural networks and training models.

  2. numpy: This is the base library for matrix/array operations in Python, for constructing data structures and processing data.

Specifically:

  • import tensorflow as tf: imports the TensorFlow library and gives it the alias tf
  • import numpy as np: imports the NumPy library and gives it the alias np
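
Since everything below relies on TensorFlow 1.x-style APIs (placeholders, sessions), it may be worth confirming the installed version first. This quick check is an addition for convenience, not part of the original code:

# Print the installed TensorFlow version (the code below assumes 1.x-style APIs)
import tensorflow as tf
print(tf.__version__)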

2. Defining a layer

# Define a layer
def add_layer(inputs, in_size, out_size, activation_function=None):
    # inputs is the layer's input, in_size is the number of neurons in the
    # previous layer, out_size is the number of neurons in this layer
    # activation_function is the activation function
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    # Weights are best initialized randomly; [in_size, out_size] is the dimension of the weight matrix
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    # Biases, initialized as zeros plus 0.1 so they do not start at zero
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    # matmul performs matrix multiplication
    if activation_function is None:
        outputs = Wx_plus_b # No activation function: keep the linear output unchanged
    else:
        outputs = activation_function(Wx_plus_b)
        # Otherwise apply the activation function and use its result as the output
    return outputs # Return the layer's output

This code defines a function add_layer that is used to add one layer to the neural network.

The parameters in the code are explained below:

  • inputs: Inputs for this layer.
  • in_size: The input dimension of the layer, i.e., the number of neurons in the previous layer.
  • out_size: The output dimension of the layer, i.e., the number of neurons in the layer.
  • activation_function: The activation function used for this layer; it may be None.

The internal logic of the function:

  • Weights = tf.Variable(tf.random_normal([in_size, out_size])): Defines the layer's weights, initialized with normally distributed random values of dimension [in_size, out_size].
  • biases = tf.Variable(tf.zeros([1, out_size]) + 0.1): Defines the layer's biases as an all-zero matrix of dimension [1, out_size] plus 0.1, so the initial biases are not zero.
  • Wx_plus_b = tf.matmul(inputs, Weights) + biases: Uses matrix multiplication to compute the layer's pre-activation output, i.e., the weighted sum plus the bias.
  • if activation_function is None:: If no activation function is given, the weighted sum plus bias is used directly as the layer's output.
  • else:: Otherwise, the weighted sum plus bias is passed through the activation function, and the result is used as the layer's output.
  • return outputs: Returns the output of the layer.

In a nutshell, this function creates one neural network layer: it takes the inputs, applies the weighted sum plus bias, and optionally an activation function, to produce the output.
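
The function can be sanity-checked on its own. Below is a minimal sketch (assuming TensorFlow 1.x graph mode and the add_layer definition above; the placeholder x is illustrative): one layer built on a batch of unknown size should yield an output of shape (?, 10).

# Minimal sanity check for add_layer (TF1 graph mode assumed)
x = tf.placeholder(tf.float32, [None, 1])   # batch of 1-feature samples
h = add_layer(x, 1, 10, activation_function=tf.nn.relu)
print(h.get_shape())  # (?, 10): unknown batch size, 10 neurons out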

3. Constructing data sets

# Construct some samples to train the neural network
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
# 300 values between -1 and 1, reshaped into a column vector
noise = np.random.normal(0, 0.05, x_data.shape)
# Gaussian noise with mean 0 and standard deviation 0.05, same shape as x_data
x_data
array([[-1.        ],
          [-0.99331104],
          [-0.98662207],
          ...,
          [ 0.98662207],
          [ 0.99331104],
          [ 1.        ]])

This code uses the numpy library to create the array x_data.

The parameters in the code are explained below:

  • -1: The minimum value in the array.
  • 1: The maximum value in the array.
  • 300: The number of values in the array.
  • [:, np.newaxis]: Converts the one-dimensional array into a two-dimensional column vector.

The internal logic of the function:

  • np.linspace(-1, 1, 300): Returns an ndarray of 300 evenly spaced numbers from -1 to 1.
  • [:, np.newaxis]: Converts the one-dimensional array into a column vector, changing its shape from (300,) to (300, 1).

The resulting x_data is a two-dimensional array whose first dimension is 300 and whose second dimension is 1: 300 samples, each with a single feature.
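
As a quick check of the np.newaxis conversion, the sketch below (an addition for illustration) prints the shape before and after:

import numpy as np

a = np.linspace(-1, 1, 300)   # one-dimensional array
print(a.shape)                # (300,)
b = a[:, np.newaxis]          # add a second axis: column vector
print(b.shape)                # (300, 1)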

# Generate the labels: y = x^2 - 0.5 plus the noise defined above (same structure as x_data);
# adding noise brings the data closer to a real-world situation
y_data = np.square(x_data) - 0.5 + noise
# The contents of y_data
y_data
array([[ 0.59535036],
          [ 0.46017998],
          [ 0.47144478],
          ...,
          [ 0.54435941],
          [ 0.49840622],
          [ 0.51627957]])

This code generates the corresponding y_data from the x_data and adds some noise.

The implementation first uses np.square(x_data) to square x_data, then subtracts the constant 0.5, and finally adds the noise, producing a y_data array with the same structure as x_data.

Since x_data is a two-dimensional array and y_data must have the same shape, y_data is also a two-dimensional array containing 300 samples, with one output value per sample.
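
To see what the network will be fitting, the data can be plotted. This is a minimal sketch assuming matplotlib is available (it is not imported in the original article):

import numpy as np
import matplotlib.pyplot as plt

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

plt.scatter(x_data, y_data, s=5)                 # noisy samples
plt.plot(x_data, np.square(x_data) - 0.5, 'r')   # underlying curve y = x^2 - 0.5
plt.show()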

4. Defining the basic model

# Define placeholders used to feed data into the network; the 1 means each sample has only one feature (one-dimensional data)
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
# add hidden layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, activation_function=None)

# The cost function: reduce_mean takes the mean, reduce_sum takes the sum, and reduction_indices specifies which dimension to reduce over
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))

# Pass the cost function to gradient descent with a learning rate of 0.1; each training step updates the weights
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

This code defines a neural network model.

First, tf.placeholder is used to create two placeholders, xs and ys, which feed in the training data and the true labels respectively. Here None means the number of samples is not fixed in advance, and 1 means each sample is one-dimensional (a single feature).

Next, the add_layer function is called to add a hidden layer whose input is xs, with 10 neurons and the ReLU activation function. add_layer is then called again to add an output layer whose input is the hidden layer's output, with 1 neuron and activation_function=None (i.e., no activation function).

Then a cost function loss is defined to measure the gap between the predicted values and the true labels; mean squared error (MSE) is chosen here. Concretely, tf.square computes the squared error for each sample, tf.reduce_sum sums it over the output dimension (reduction_indices=[1]), and tf.reduce_mean averages the result over all samples.

Finally, tf.train.GradientDescentOptimizer creates an optimizer with the learning rate set to 0.1, and its minimize method is called on the cost function loss. Each optimization step updates the weights and biases of the network so that the predictions move ever closer to the true labels.
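
For intuition, the loss above is ordinary mean squared error. The numpy sketch below (with made-up illustrative values) reproduces the same computation: square the per-sample error, sum over the output dimension, then average over samples.

import numpy as np

y_true = np.array([[0.1], [0.4], [0.9]])   # illustrative labels
y_pred = np.array([[0.2], [0.35], [0.8]])  # illustrative predictions
# Sum squared error over each sample's output dimension, then average over samples
mse = np.mean(np.sum(np.square(y_true - y_pred), axis=1))
print(mse)  # 0.0075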

5. Variable initialization

# important step
# tf.initialize_all_variables() is no longer valid since
# 2017-03-02 if using tensorflow >= 0.12
# Variable initialization
if int((tf.__version__).split('.')[1]) < 12:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess = tf.Session() # Create a TensorFlow session
sess.run(init) # Perform variable initialization

This code performs TensorFlow's initialization step. Because of a TensorFlow version change, the original tf.initialize_all_variables() is no longer supported and has been replaced by tf.global_variables_initializer(). A tf.Session object sess is then created, which is used to execute the operations defined in the TensorFlow graph.

Finally, running the init operation performs the variable initialization: all previously defined variables (including the weights and biases) receive their initial values, ready for training the model.
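
Note that tf.placeholder and tf.Session were removed from the top-level API in TensorFlow 2.x. One way to still run this graph-style code there is the v1 compatibility module; this is a sketch of the usual workaround, not part of the original article:

# Run TF1-style graph code under TensorFlow 2.x
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restore placeholders, sessions, and graph mode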

6. Commencement of training

for i in range(1000): # One thousand iterations of gradient descent
    # training
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    # Execute the gradient descent algorithm and feed the samples to the loss function
    if i % 50 == 0:
        # Output the value of the cost function every 50 iterations
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
0.18214862
0.010138167
0.0071248626
0.0069830194
0.0068635535
0.0067452225
0.006626569
0.0065121166
0.0064035906
0.006295418
0.0061897114
0.0060903295
0.005990808
0.0058959606
0.0058057955
0.0057200184
0.005637601
0.0055605737
0.0054863705
0.005413457

This code trains the model by running gradient descent. It loops 1000 times and prints the value of the cost function loss every 50 iterations. In each iteration, sess.run(train_step, feed_dict={xs: x_data, ys: y_data}) performs one gradient descent step, feeding the samples into the graph.

The printed values let you observe how the cost function changes: if it keeps decreasing as the iterations proceed, the model is training well and is predicting the target variable more and more accurately.
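
Once training finishes, the fitted curve can be compared against the samples. This is a minimal sketch, assuming matplotlib and the sess, prediction, xs, x_data, and y_data defined above:

import matplotlib.pyplot as plt

pred = sess.run(prediction, feed_dict={xs: x_data})  # network output after training
plt.scatter(x_data, y_data, s=5)    # noisy training samples
plt.plot(x_data, pred, 'r-', lw=2)  # fitted curve
plt.show()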

P.S. Series of articles

  1. Boston Home Price Forecast: https://want595.blog.csdn.net/article/details/132181950
  2. Analysis of the Iris dataset: https://want595.blog.csdn.net/article/details/132182057
  3. Feature processing: https://want595.blog.csdn.net/article/details/132182165
  4. Cross-validation: https://want595.blog.csdn.net/article/details/132182238
  5. Example of constructing a neural network: https://want595.blog.csdn.net/article/details/132182341
  6. Complete linear regression using TensorFlow: https://want595.blog.csdn.net/article/details/132182417
  7. Logistic regression done with TensorFlow: https://want595.blog.csdn.net/article/details/132182496
  8. TensorBoard Case: https://want595.blog.csdn.net/article/details/132182584
  9. Complete linear regression with Keras: https://want595.blog.csdn.net/article/details/132182723
  10. Logistic regression using Keras: https://want595.blog.csdn.net/article/details/132182795
  11. Cat and Dog Recognition Using Keras Pre-Trained Models: https://want595.blog.csdn.net/article/details/132243928
  12. Training models with PyTorch: https://want595.blog.csdn.net/article/details/132243989
  13. Suppressing Overfitting with Dropout: https://want595.blog.csdn.net/article/details/132244111
  14. MNIST handwriting recognition done using CNN (TensorFlow): https://want595.blog.csdn.net/article/details/132244499
  15. MNIST handwriting recognition done using CNN (Keras): https://want595.blog.csdn.net/article/details/132244552
  16. MNIST handwriting recognition done using CNN (PyTorch): https://want595.blog.csdn.net/article/details/132244641
  17. Generating handwritten digit samples using GAN: https://want595.blog.csdn.net/article/details/132244764
  18. Natural language processing (NLP): https://want595.blog.csdn.net/article/details/132276591
