The dataset used in this section is the MNIST handwritten-digit dataset. Chapter 13 of the Alink tutorial introduced this dataset in detail, tried several common non-deep multi-class classifiers, and compared their classification results; interested readers can refer to that chapter, so we will not repeat it here. The focus of this section is to demonstrate how to classify images with a deep neural network (DNN) and a convolutional neural network (CNN).
The data-related settings are as follows. We also call AlinkGlobalConfiguration.setPrintProcessInfo(True) so that each component prints its runtime information.
Chap13_DATA_DIR = ROOT_DIR + "mnist" + os.sep
Chap13_DENSE_TRAIN_FILE = "dense_train.ak"
Chap13_DENSE_TEST_FILE = "dense_test.ak"
PIPELINE_TF_MODEL = "pipeline_tf_model.ak"
PIPELINE_PYTORCH_MODEL = "pipeline_pytorch_model.ak"

AlinkGlobalConfiguration.setPrintProcessInfo(True)
In the original training and test sets, each image is stored as a vector, so we can apply the common Softmax algorithm for multi-class classification. The code is as follows:
def softmax(train_set, test_set):
    Pipeline()\
        .add(\
            Softmax()\
                .setVectorCol("vec")\
                .setLabelCol("label")\
                .setPredictionCol("pred")\
        )\
        .fit(train_set)\
        .transform(test_set)\
        .link(\
            EvalMultiClassBatchOp()\
                .setLabelCol("label")\
                .setPredictionCol("pred")\
                .lazyPrintMetrics()\
        )
    BatchOperator.execute()
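For intuition, the prediction rule behind the Softmax classifier can be sketched in plain NumPy: given per-class scores z, it outputs probabilities exp(z_i)/Σ_j exp(z_j) and predicts the class with the largest probability. This is an illustrative sketch only; the Alink Softmax component handles training and prediction internally.

```python
import numpy as np

def softmax_probs(z):
    # Subtract the max for numerical stability; the result is unchanged mathematically.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical scores for the 10 digit classes (e.g., W @ x + b from a trained linear model).
scores = np.array([1.0, 2.0, 0.5, 0.1, 0.0, -1.0, 0.3, 0.2, 0.4, 3.0])
probs = softmax_probs(scores)        # probabilities summing to 1
pred = int(np.argmax(probs))         # predicted digit: index of the largest probability
```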
As shown in the following code, specify the training and test sets and run:
train_set = AkSourceBatchOp().setFilePath(Chap13_DATA_DIR + Chap13_DENSE_TRAIN_FILE)
test_set = AkSourceBatchOp().setFilePath(Chap13_DATA_DIR + Chap13_DENSE_TEST_FILE)

softmax(train_set, test_set)
The evaluation output is shown below. We will use it as the baseline and improve on it with the deep models built next.
-------------------------------- Metrics: --------------------------------
Accuracy:0.9224  Macro F1:0.9213  Micro F1:0.9224  Kappa:0.9137
|Pred\Real|  9|  8|  7|...|  2|   1|  0|
|---------|---|---|---|---|---|----|---|
|        9|922| 11| 31|...|  4|   0|  0|
|        8|  9|859|  3|...| 37|  10|  2|
|        7| 23| 11|945|...|  8|   1|  3|
|      ...|...|...|...|...|...| ...|...|
|        2|  2|  5| 24|...|915|   6|  1|
|        1|  7| 10|  8|...| 10|1112|  0|
|        0|  6|  9|  2|...| 11|   0|954|
Next we try building a model with a deep neural network, adding two fully connected layers between the input and output layers, with 256 and 128 neurons respectively. The network is summarized as follows:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
tensor (InputLayer)          [(None, 784)]             0
_________________________________________________________________
dense (Dense)                (None, 256)               200960
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896
_________________________________________________________________
logits (Dense)               (None, 10)                1290
=================================================================
Total params: 235,146
Trainable params: 235,146
Non-trainable params: 0
_________________________________________________________________
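The parameter counts in this summary can be checked by hand: a fully connected (Dense) layer mapping m inputs to n outputs has m*n weights plus n biases. A quick sketch reproducing the numbers:

```python
def dense_params(m, n):
    # Fully connected layer: m*n weights + n biases.
    return m * n + n

# (inputs, outputs) for each Dense layer in the summary above.
layers = [(784, 256), (256, 128), (128, 10)]
counts = [dense_params(m, n) for m, n in layers]
total = sum(counts)
print(counts, total)  # [200960, 32896, 1290] 235146
```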
Implementing this network in Alink is straightforward: use the Keras classifier component (KerasSequentialClassifier) in a Pipeline and define the hidden layers through the setLayers method. From the feature dimension and the number of classes, the component automatically configures the input layer (784 units) and the output layer (10 units). Note: each component of the original vectors takes values in [0, 255]; before feeding them to a deep model, divide every component by 255 so that values fall in [0, 1].
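In plain NumPy terms this rescaling is a single multiplication; the pipeline below performs the equivalent operation with VectorFunction's Scale function.

```python
import numpy as np

# A few raw pixel values in [0, 255].
pixels = np.array([0.0, 128.0, 255.0])
scaled = pixels * (1.0 / 255.0)   # now in [0, 1]
```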
def dnn(train_set, test_set):
    Pipeline()\
        .add(\
            VectorFunction()\
                .setSelectedCol("vec")\
                .setFuncName("Scale")\
                .setWithVariable(1.0 / 255.0)\
        )\
        .add(\
            VectorToTensor()\
                .setTensorDataType("float")\
                .setSelectedCol("vec")\
                .setOutputCol("tensor")\
                .setReservedCols(["label"])\
        )\
        .add(\
            KerasSequentialClassifier()\
                .setTensorCol("tensor")\
                .setLabelCol("label")\
                .setPredictionCol("pred")\
                .setLayers([
                    "Dense(256, activation='relu')",
                    "Dense(128, activation='relu')"
                ])\
                .setNumEpochs(50)\
                .setBatchSize(512)\
                .setValidationSplit(0.1)\
                .setSaveBestOnly(True)\
                .setBestMetric("sparse_categorical_accuracy")\
                .setNumWorkers(1)\
                .setNumPSs(0)\
        )\
        .fit(train_set)\
        .transform(test_set)\
        .link(\
            EvalMultiClassBatchOp()\
                .setLabelCol("label")\
                .setPredictionCol("pred")\
                .lazyPrintMetrics()\
        )
    BatchOperator.execute()
As shown in the following code, specify the training and test sets and run:
train_set = AkSourceBatchOp().setFilePath(Chap13_DATA_DIR + Chap13_DENSE_TRAIN_FILE)
test_set = AkSourceBatchOp().setFilePath(Chap13_DATA_DIR + Chap13_DENSE_TEST_FILE)

dnn(train_set, test_set)
The computed evaluation metrics are as follows, a clear improvement over the Softmax baseline.
-------------------------------- Metrics: --------------------------------
Accuracy:0.9795  Macro F1:0.9794  Micro F1:0.9795  Kappa:0.9772
|Pred\Real|  9|  8|   7|...|   2|   1|  0|
|---------|---|---|----|---|----|----|---|
|        9|983|  4|   7|...|   0|   0|  1|
|        8|  3|947|   4|...|   6|   3|  2|
|        7|  3|  3|1000|...|   4|   1|  1|
|      ...|...|...| ...|...| ...| ...|...|
|        2|  0|  3|   9|...|1010|   3|  0|
|        1|  2|  0|   3|...|   0|1125|  0|
|        0|  1|  4|   2|...|   3|   0|970|
This part focuses on implementing a convolutional neural network (CNN); the basic concepts and theory of CNNs are not covered here. The network is built mainly from 2D convolution layers and 2D max-pooling layers, and is summarized in detail as follows:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
tensor (InputLayer)          [(None, 28, 28)]          0
_________________________________________________________________
reshape (Reshape)            (None, 28, 28, 1)         0
_________________________________________________________________
conv2d (Conv2D)              (None, 26, 26, 32)        320
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 32)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 11, 11, 64)        18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64)          0
_________________________________________________________________
flatten (Flatten)            (None, 1600)              0
_________________________________________________________________
dropout (Dropout)            (None, 1600)              0
_________________________________________________________________
logits (Dense)               (None, 10)                16010
=================================================================
Total params: 34,826
Trainable params: 34,826
Non-trainable params: 0
_________________________________________________________________
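The shapes and parameter counts in this summary follow from the standard formulas: a 3x3 convolution with 'valid' padding shrinks each spatial side by 2, a 2x2 max pooling halves it (rounding down), and a Conv2D layer with k x k kernels, c_in input channels, and c_out filters has k*k*c_in*c_out + c_out parameters. A sketch that reproduces the numbers:

```python
def conv2d(size, k, c_in, c_out):
    # 'valid' padding: each side shrinks by k-1; params = weights + biases.
    return size - k + 1, k * k * c_in * c_out + c_out

def max_pool(size, p=2):
    # Pooling with stride p halves the side length, rounding down.
    return size // p

s, params = 28, []
s, p = conv2d(s, 3, 1, 32);  params.append(p)    # -> 26x26x32, 320 params
s = max_pool(s)                                  # -> 13x13x32
s, p = conv2d(s, 3, 32, 64); params.append(p)    # -> 11x11x64, 18496 params
s = max_pool(s)                                  # -> 5x5x64
flat = s * s * 64                                # Flatten: 1600
params.append(flat * 10 + 10)                    # logits Dense: 16010
total = sum(params)
print(flat, params, total)  # 1600 [320, 18496, 16010] 34826
```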
As before, use the Keras classifier component (KerasSequentialClassifier) in a Pipeline and define the convolutional network through the setLayers method:
def cnn(train_set, test_set):
    Pipeline()\
        .add(\
            VectorFunction()\
                .setSelectedCol("vec")\
                .setFuncName("Scale")\
                .setWithVariable(1.0 / 255.0)\
        )\
        .add(\
            VectorToTensor()\
                .setTensorDataType("float")\
                .setTensorShape([28, 28])\
                .setSelectedCol("vec")\
                .setOutputCol("tensor")\
                .setReservedCols(["label"])\
        )\
        .add(\
            KerasSequentialClassifier()\
                .setTensorCol("tensor")\
                .setLabelCol("label")\
                .setPredictionCol("pred")\
                .setLayers([
                    "Reshape((28, 28, 1))",
                    "Conv2D(32, kernel_size=(3, 3), activation='relu')",
                    "MaxPooling2D(pool_size=(2, 2))",
                    "Conv2D(64, kernel_size=(3, 3), activation='relu')",
                    "MaxPooling2D(pool_size=(2, 2))",
                    "Flatten()",
                    "Dropout(0.5)"
                ])\
                .setNumEpochs(20)\
                .setValidationSplit(0.1)\
                .setSaveBestOnly(True)\
                .setBestMetric("sparse_categorical_accuracy")\
                .setNumWorkers(1)\
                .setNumPSs(0)\
        )\
        .fit(train_set)\
        .transform(test_set)\
        .link(\
            EvalMultiClassBatchOp()\
                .setLabelCol("label")\
                .setPredictionCol("pred")\
                .lazyPrintMetrics()\
        )
    BatchOperator.execute()
As shown in the following code, specify the training and test sets and run:
train_set = AkSourceBatchOp().setFilePath(Chap13_DATA_DIR + Chap13_DENSE_TRAIN_FILE)
test_set = AkSourceBatchOp().setFilePath(Chap13_DATA_DIR + Chap13_DENSE_TEST_FILE)

cnn(train_set, test_set)
The computed evaluation metrics are as follows; compared with the previous deep model, the results improve further.
-------------------------------- Metrics: --------------------------------
Accuracy:0.9918  Macro F1:0.9918  Micro F1:0.9918  Kappa:0.9909
|Pred\Real|  9|  8|   7|...|   2|   1|  0|
|---------|---|---|----|---|----|----|---|
|        9|995|  2|   1|...|   0|   0|  0|
|        8|  3|965|   1|...|   2|   1|  1|
|        7|  2|  1|1016|...|   3|   0|  1|
|      ...|...|...| ...|...| ...| ...|...|
|        2|  0|  1|   6|...|1026|   3|  0|
|        1|  2|  0|   3|...|   0|1131|  1|
|        0|  1|  2|   0|...|   1|   0|977|