53. TensorFlow Automotive Assessment Classification#
53.1. Introduction#
In the previous experiments, you learned how to build an artificial neural network using TensorFlow. In this challenge, you will independently complete an open-ended classification exercise.
53.2. Key Points#
Building neural networks with TensorFlow
Tensor data processing and transformation
Loss functions, optimizers
In the experiment on building an artificial neural network with TensorFlow, you learned the general steps for constructing a network. In this challenge, the UCI Car Evaluation Dataset is provided; you need to build a neural network on this dataset yourself and complete its training.
First, load the dataset and perform some simple processing.
wget -nc https://cdn.aibydoing.com/aibydoing/files/car.data
import pandas as pd
# Load the dataset
df = pd.read_csv("car.data", header=None)
# Set the column names
df.columns = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
df.head()
| | buying | maint | doors | persons | lug_boot | safety | class |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | vhigh | vhigh | 2 | 2 | small | low | unacc |
| 1 | vhigh | vhigh | 2 | 2 | small | med | unacc |
| 2 | vhigh | vhigh | 2 | 2 | small | high | unacc |
| 3 | vhigh | vhigh | 2 | 2 | med | low | unacc |
| 4 | vhigh | vhigh | 2 | 2 | med | med | unacc |
The dataset contains 6 features and a `class` target column. The features describe the price and some technical specifications of each car, while the target is the overall assessment of the sample, making this a multi-class classification dataset.
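Since every column is categorical, it is worth checking the distinct values each one takes before choosing an encoding. A small exploratory sketch (not part of the reference solution; it uses the `df` loaded above):

# Inspect the categories each column takes
for col in df.columns:
    print(col, sorted(df[col].unique()))

# Class balance of the target
print(df["class"].value_counts())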
The challenge first divides the dataset into a training set and a test set:
from sklearn.model_selection import train_test_split
X = df.iloc[:, :-1]  # Features
y = df["class"]  # Target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
((1382, 6), (346, 6), (1382,), (346,))
Next, you will independently complete the construction of the neural network and finally obtain the classification accuracy result.
Exercise 53.1
Open-ended Challenge
Challenge: Use TensorFlow to build a reasonable fully-connected artificial neural network to complete the classification task of car safety assessment.
Specification: You need to use the TensorFlow functions and methods learned previously to complete the construction, training, prediction, and evaluation of the network. You may choose the data processing method, neural network structure, loss function, optimization method, and so on by yourself. A small amount of functions and operations from other libraries is allowed for non-core code such as data preprocessing.
The open-ended challenge has no step-by-step guidance, which offers greater freedom and flexibility and better exercises your ability to solve problems independently. It may involve some knowledge points outside the curriculum, so a certain spirit of exploration is required.
Solution to Exercise 53.1
# One-hot encode the categorical features and labels
# (recent pandas versions return boolean dummies, so cast to float)
X_train = pd.get_dummies(X_train).values.astype("float32")
X_test = pd.get_dummies(X_test).values.astype("float32")
y_train = pd.get_dummies(y_train).values.astype("float32")
y_test = pd.get_dummies(y_test).values.astype("float32")
X_train.shape, X_test.shape, y_train.shape, y_test.shape
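One caveat: applying `pd.get_dummies` to the training and test sets separately only lines up here because every category happens to appear in both splits. A more robust pattern (a sketch under that assumption, not the reference solution; `X`, `y`, and `train_test_split` are defined earlier) is to encode once before splitting:

# Encode the full frame first, so train and test are guaranteed
# to share the same one-hot columns
X_encoded = pd.get_dummies(X).astype("float32").values  # 21 one-hot feature columns
y_encoded = pd.get_dummies(y).astype("float32").values  # 4 one-hot label columns
X_train, X_test, y_train, y_test = train_test_split(
    X_encoded, y_encoded, test_size=0.2, random_state=1)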
import tensorflow as tf

# The solution below uses the TensorFlow 1.x graph API (placeholders and sessions).
# On TensorFlow 2.x, run it through the compatibility module instead:
# import tensorflow.compat.v1 as tf
# tf.disable_v2_behavior()
def fully_connected(inputs, weights, biases):
    """
    inputs -- Input Variable
    weights -- Weight term Variable
    biases -- Intercept term Variable
    """
    layer = tf.add(tf.matmul(inputs, weights), biases)  # inputs x weights + intercept
    output = tf.nn.relu(layer)  # ReLU activation
    return output
x = tf.placeholder(tf.float32, [None, 21]) # Input feature tensor placeholder
# Fully connected layer 1
W1 = tf.Variable(tf.random.uniform([21, 15])) # Randomly initialize weights
b1 = tf.Variable(tf.random.uniform([15]))
fc1 = fully_connected(x, W1, b1)
# Fully connected layer 2
W2 = tf.Variable(tf.random.uniform([15, 4]))
b2 = tf.Variable(tf.random.uniform([4]))
outs = fully_connected(fc1, W2, b2)  # Note: this also applies ReLU to the output logits; a linear output layer is more conventional
outs # Output
y = tf.placeholder(tf.float32, [None, 4]) # True value label placeholder
# Cross-entropy loss function, the purpose of reduce_mean is to average the calculation results of each sample
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(logits=outs, labels=y))
loss
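For reference, the quantity being minimized is the batch-averaged softmax cross-entropy (the standard formula behind `softmax_cross_entropy_with_logits_v2`, stated here for clarity): for logits $o_{ic}$ and one-hot labels $y_{ic}$ over $N$ samples and $C = 4$ classes,

$$\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} y_{ic} \log \frac{e^{o_{ic}}}{\sum_{k=1}^{C} e^{o_{ik}}}$$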
train_step = tf.train.AdamOptimizer(0.01).minimize(loss)
train_step
acc = tf.reduce_mean(tf.cast(tf.math.in_top_k(
    outs, tf.math.argmax(y, 1), k=1), tf.float32))  # Accuracy calculation
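`tf.math.in_top_k` with `k=1` simply checks whether the highest-scoring logit matches the true class, so an equivalent formulation (a sketch, not part of the reference solution; `outs` and `y` are the tensors defined above) compares the arg-max predictions directly:

# Equivalent accuracy: compare predicted class indices with the true ones
correct = tf.equal(tf.math.argmax(outs, 1), tf.math.argmax(y, 1))
acc = tf.reduce_mean(tf.cast(correct, tf.float32))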
iters = 1000 # Number of iterations
feed_dict_train = {x: X_train, y: y_train} # Training data
feed_dict_test = {x: X_test, y: y_test} # Test data
init = tf.global_variables_initializer() # Initialize global variables
with tf.Session() as sess:
    sess.run(init)
    for i in range(iters):
        if (i + 1) % 100 == 0:  # Print train/test accuracy every 100 iterations
            print("Iters [{}/{}], Train Acc [{:.3f}], Test Acc [{:.3f}]".format(
                i + 1, iters,
                acc.eval(feed_dict=feed_dict_train),
                acc.eval(feed_dict=feed_dict_test)))
        sess.run(train_step, feed_dict=feed_dict_train)
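If you are working with TensorFlow 2.x instead of the 1.x graph API, a minimal Keras sketch of the same two-layer network might look like the following (an alternative implementation offered as an assumption, not the solution above; the layer sizes mirror the TF1 code, and `X_train`/`y_train` are the one-hot arrays prepared earlier):

# Minimal TF2/Keras equivalent of the network above (sketch)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(15, activation="relu", input_shape=(21,)),
    tf.keras.layers.Dense(4),  # linear logits for the 4 classes
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(X_train, y_train, epochs=100, validation_data=(X_test, y_test))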
Finally, the challenge expects you to obtain test-set accuracy and loss under a reasonable number of iterations. An example is as follows:
Expected Output
Epoch [000/500], Accuracy: [0.17], Loss: [9.9333]
Epoch [100/500], Accuracy: [0.93], Loss: [0.1228]
Epoch [200/500], Accuracy: [0.93], Loss: [0.1056]
Epoch [300/500], Accuracy: [0.93], Loss: [0.1025]
Epoch [400/500], Accuracy: [0.94], Loss: [0.1003]
Epoch [500/500], Accuracy: [0.94], Loss: [0.0992]