
Printing Training Logs for Machine Learning Models

This post documents how I print training logs while fitting machine learning models from Python's sklearn library.

scikit-learn is a well-known Python library that ships a large number of machine learning algorithms. In our scenario we need to run several of these algorithms and print training information while each one fits. Some of the algorithms already print a training log on their own, but others do not, so there are two problems to solve:

  1. For algorithms with built-in training output, write that output to a specified file.
  2. For algorithms without built-in training output, print some simple progress information ourselves while they train.

Printing Logs for Algorithms with Built-in Training Output

Enabling the Training Log

Although some algorithms can print their training progress, most of them print nothing (or very little) by default. The trick is to read the docstring of the algorithm's implementation in sklearn and look for a parameter that controls training output. Here are a few examples:

from sklearn import ensemble
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.neighbors import LocalOutlierFactor
import lightgbm as lgb

# hyperparameter is a dict holding the remaining keyword arguments
# Most algorithms control training output through the verbose parameter, e.g.
clf = ensemble.HistGradientBoostingClassifier(verbose=1, **hyperparameter)
clf = ExtraTreesClassifier(verbose=3, **hyperparameter)
clf = RandomForestClassifier(verbose=99, **hyperparameter)

# But some use a different parameter, so check the docstring of each implementation
clf = LocalOutlierFactor(novelty=True, **hyperparameter)

# LightGBM even controls this in the fit() call
clf.fit(x, y, eval_set=[(x, y)], callbacks=[lgb.log_evaluation(1)])
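If you are not sure whether a particular estimator exposes verbose, a quick check with get_params() (a standard scikit-learn API) saves digging through the source. A minimal sketch:

from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import LocalOutlierFactor

# get_params() returns every constructor parameter of an estimator
print('verbose' in RandomForestClassifier().get_params())  # True
print('verbose' in LocalOutlierFactor().get_params())      # False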

Writing the Training Log to a Specified File

Almost none of these algorithms accept a parameter that specifies where the training log should go (I have met exactly one exception, covered below); they simply print to the console by default. So we need to redirect what would normally go to the console into a file of our choosing.

The main tool here is sys.stdout. First, a quick look at its basic usage:

import sys
print('111')

oldPrint = sys.stdout  # keep a reference so we can restore it later

# redirect output to a file
sys.stdout = open('outfile.log', 'a+')
print('222')

sys.stdout.close()  # close the file
sys.stdout = oldPrint  # restore output to the console
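As an aside, the standard library's contextlib.redirect_stdout wraps the same swap-and-restore dance in a context manager; a minimal equivalent sketch:

import contextlib

# same effect as the manual swap above, with automatic restore on exit
with open('outfile.log', 'a+') as f, contextlib.redirect_stdout(f):
    print('222')  # goes to outfile.log
# back on the console here
print('333')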

With this, the redirection itself already works; we just have to replace print('222') with the actual model training, like so:

import sys
import time

def train_model(algorithm, clf, x, y, train_log_path):
    cmd = sys.stdout  # remember the console stream
    sys.stdout = open(train_log_path, mode='a', encoding='utf-8')
    start = time.time()

    clf.fit(x, y)

    end = time.time()
    sys.stdout.close()
    sys.stdout = cmd  # restore the console stream

    return start, end

In practice, though, we hit a problem: some algorithms worked fine, but others did not write to the file in real time; the whole log appeared only after training had finished. We suspected a flushing issue, so we overrode sys.stdout's write() and flush() methods:

import sys
import time

class Logger(object):
    def __init__(self, filename='default.log', stream=sys.stdout):
        self.terminal = stream
        self.log = open(filename, 'a', encoding='utf-8')

    def write(self, message):
        # self.terminal.write(message)
        # self.terminal.flush()
        self.log.write(message)
        self.log.flush()  # flush every write so the file updates in real time

    def flush(self):
        pass


def train_model(algorithm, clf, x, y, train_log_path):
    cmd = sys.stdout
    sys.stdout = Logger(train_log_path, sys.stdout)
    # sys.stdout = open(train_log_path, mode='a', encoding='utf-8')
    start = time.time()

    clf.fit(x, y)

    end = time.time()
    # sys.stdout.close()
    sys.stdout = cmd

    return start, end

After this change, every algorithm's training log is written to the target file in real time.
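For completeness, here is a hypothetical usage sketch; x_train, y_train, and the hyperparameters are placeholders rather than values from the original project:

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100, verbose=2)
start, end = train_model('RF', clf, x_train, y_train, 'train.log')
print('RF training took %.2f seconds' % (end - start))

As a side note, opening the log file with buffering=1 (line buffering) can also make output appear line by line, but overriding write() works no matter how the algorithm flushes.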

A Redirection Caveat

Redirecting the training log to a file is, at bottom, nothing more than repointing sys.stdout. But a very small number of algorithms expose a dedicated parameter for the log destination, for example catboost:

def fit(self, X, y=None, cat_features=None, text_features=None, embedding_features=None, sample_weight=None, baseline=None, use_best_model=None, eval_set=None, verbose=None, logging_level=None, plot=False, column_description=None, verbose_eval=None, metric_period=None, silent=None, early_stopping_rounds=None, save_snapshot=None, snapshot_file=None, snapshot_interval=None, init_model=None, callbacks=None, log_cout=sys.stdout, log_cerr=sys.stderr):

As you can see, catboost's fit() has a dedicated log_cout parameter that controls output. For an algorithm like this, redirecting the training log means passing our modified sys.stdout into fit().
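A minimal sketch of what that looks like, assuming the Logger class from above (the CatBoostClassifier setup is an illustrative example, not the original code):

from catboost import CatBoostClassifier

clf = CatBoostClassifier(iterations=100)
log_stream = Logger(train_log_path, sys.stdout)  # our Logger from above
clf.fit(x, y, log_cout=log_stream)  # CatBoost writes its training log through log_cout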

Sample Output

Algorithm HGB is training
Binning 0.123 GB of training data: 1.479 s
Binning 0.014 GB of validation data: 0.017 s
Fitting gradient boosted rounds:
[1/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.40411, val loss: 0.40415, in 0.159s
[2/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.34343, val loss: 0.34353, in 0.062s
[3/100] 1 tree, 31 leaves, max depth = 8, train loss: 0.29777, val loss: 0.29790, in 0.058s
[4/100] 1 tree, 31 leaves, max depth = 8, train loss: 0.26076, val loss: 0.26084, in 0.059s
[5/100] 1 tree, 31 leaves, max depth = 8, train loss: 0.23027, val loss: 0.23030, in 0.057s
[6/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.20460, val loss: 0.20460, in 0.056s
[7/100] 1 tree, 31 leaves, max depth = 8, train loss: 0.18270, val loss: 0.18263, in 0.061s
[8/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.16379, val loss: 0.16374, in 0.060s
[9/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.14730, val loss: 0.14720, in 0.060s
[10/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.13293, val loss: 0.13283, in 0.061s
[11/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.12029, val loss: 0.12016, in 0.061s
[12/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.10902, val loss: 0.10894, in 0.063s
[13/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.09799, val loss: 0.09816, in 0.064s
[14/100] 1 tree, 31 leaves, max depth = 8, train loss: 0.08923, val loss: 0.08939, in 0.060s
[15/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.08115, val loss: 0.08130, in 0.061s
[16/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.07382, val loss: 0.07382, in 0.064s
[17/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.06734, val loss: 0.06732, in 0.062s
[18/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.06146, val loss: 0.06148, in 0.064s
[19/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.05623, val loss: 0.05628, in 0.063s
[20/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.05124, val loss: 0.05127, in 0.063s
[21/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.04676, val loss: 0.04682, in 0.066s
[22/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.04273, val loss: 0.04286, in 0.063s
[23/100] 1 tree, 31 leaves, max depth = 13, train loss: 0.03913, val loss: 0.03934, in 0.061s
[24/100] 1 tree, 31 leaves, max depth = 13, train loss: 0.03588, val loss: 0.03615, in 0.064s
[25/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.03303, val loss: 0.03333, in 0.063s
[26/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.03044, val loss: 0.03079, in 0.065s
[27/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.02812, val loss: 0.02852, in 0.067s
[28/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.02533, val loss: 0.02568, in 0.061s
[29/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.02338, val loss: 0.02374, in 0.067s
[30/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.02164, val loss: 0.02198, in 0.062s
[31/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.02003, val loss: 0.02043, in 0.065s
[32/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.01863, val loss: 0.01887, in 0.060s
[33/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.01729, val loss: 0.01762, in 0.064s
[34/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.01607, val loss: 0.01643, in 0.065s
[35/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.01503, val loss: 0.01544, in 0.069s
[36/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.01404, val loss: 0.01454, in 0.068s
[37/100] 1 tree, 31 leaves, max depth = 9, train loss: 0.01317, val loss: 0.01373, in 0.064s
[38/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.01239, val loss: 0.01297, in 0.065s
[39/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.01160, val loss: 0.01222, in 0.064s
[40/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.01092, val loss: 0.01152, in 0.066s
[41/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.01032, val loss: 0.01089, in 0.066s
[42/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00974, val loss: 0.01033, in 0.068s
[43/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00921, val loss: 0.00981, in 0.065s
[44/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00873, val loss: 0.00935, in 0.071s
[45/100] 1 tree, 31 leaves, max depth = 13, train loss: 0.00831, val loss: 0.00895, in 0.067s
[46/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.00792, val loss: 0.00857, in 0.066s
[47/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.00753, val loss: 0.00820, in 0.068s
[48/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00716, val loss: 0.00790, in 0.062s
[49/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.00682, val loss: 0.00760, in 0.063s
[50/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.00650, val loss: 0.00734, in 0.062s
[51/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.00620, val loss: 0.00703, in 0.064s
[52/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.00593, val loss: 0.00676, in 0.067s
[53/100] 1 tree, 31 leaves, max depth = 13, train loss: 0.00572, val loss: 0.00654, in 0.062s
[54/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00553, val loss: 0.00634, in 0.068s
[55/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.00535, val loss: 0.00616, in 0.068s
[56/100] 1 tree, 31 leaves, max depth = 18, train loss: 0.00516, val loss: 0.00598, in 0.061s
[57/100] 1 tree, 31 leaves, max depth = 17, train loss: 0.00497, val loss: 0.00579, in 0.067s
[58/100] 1 tree, 31 leaves, max depth = 17, train loss: 0.00481, val loss: 0.00563, in 0.064s
[59/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00467, val loss: 0.00550, in 0.064s
[60/100] 1 tree, 31 leaves, max depth = 15, train loss: 0.00452, val loss: 0.00537, in 0.066s
[61/100] 1 tree, 31 leaves, max depth = 13, train loss: 0.00440, val loss: 0.00526, in 0.069s
[62/100] 1 tree, 31 leaves, max depth = 15, train loss: 0.00426, val loss: 0.00513, in 0.064s
[63/100] 1 tree, 31 leaves, max depth = 17, train loss: 0.00414, val loss: 0.00502, in 0.070s
[64/100] 1 tree, 31 leaves, max depth = 16, train loss: 0.00403, val loss: 0.00494, in 0.067s
[65/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.00395, val loss: 0.00488, in 0.063s
[66/100] 1 tree, 31 leaves, max depth = 15, train loss: 0.00385, val loss: 0.00479, in 0.067s
[67/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00377, val loss: 0.00473, in 0.065s
[68/100] 1 tree, 31 leaves, max depth = 18, train loss: 0.00367, val loss: 0.00464, in 0.067s
[69/100] 1 tree, 31 leaves, max depth = 19, train loss: 0.00358, val loss: 0.00458, in 0.065s
[70/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.00352, val loss: 0.00453, in 0.060s
[71/100] 1 tree, 31 leaves, max depth = 13, train loss: 0.00344, val loss: 0.00447, in 0.065s
[72/100] 1 tree, 31 leaves, max depth = 16, train loss: 0.00337, val loss: 0.00443, in 0.067s
[73/100] 1 tree, 31 leaves, max depth = 17, train loss: 0.00332, val loss: 0.00439, in 0.063s
[74/100] 1 tree, 31 leaves, max depth = 17, train loss: 0.00324, val loss: 0.00433, in 0.068s
[75/100] 1 tree, 31 leaves, max depth = 15, train loss: 0.00320, val loss: 0.00429, in 0.065s
[76/100] 1 tree, 31 leaves, max depth = 16, train loss: 0.00315, val loss: 0.00423, in 0.065s
[77/100] 1 tree, 31 leaves, max depth = 15, train loss: 0.00310, val loss: 0.00420, in 0.065s
[78/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.00304, val loss: 0.00412, in 0.067s
[79/100] 1 tree, 31 leaves, max depth = 20, train loss: 0.00300, val loss: 0.00409, in 0.065s
[80/100] 1 tree, 31 leaves, max depth = 21, train loss: 0.00296, val loss: 0.00405, in 0.064s
[81/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.00291, val loss: 0.00402, in 0.069s
[82/100] 1 tree, 31 leaves, max depth = 10, train loss: 0.00287, val loss: 0.00397, in 0.066s
[83/100] 1 tree, 31 leaves, max depth = 13, train loss: 0.00282, val loss: 0.00395, in 0.065s
[84/100] 1 tree, 31 leaves, max depth = 15, train loss: 0.00279, val loss: 0.00394, in 0.060s
[85/100] 1 tree, 31 leaves, max depth = 16, train loss: 0.00276, val loss: 0.00392, in 0.065s
[86/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.00272, val loss: 0.00390, in 0.064s
[87/100] 1 tree, 31 leaves, max depth = 18, train loss: 0.00269, val loss: 0.00388, in 0.063s
[88/100] 1 tree, 31 leaves, max depth = 13, train loss: 0.00265, val loss: 0.00387, in 0.061s
[89/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00261, val loss: 0.00384, in 0.066s
[90/100] 1 tree, 31 leaves, max depth = 18, train loss: 0.00259, val loss: 0.00383, in 0.067s
[91/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.00254, val loss: 0.00383, in 0.073s
[92/100] 1 tree, 31 leaves, max depth = 16, train loss: 0.00252, val loss: 0.00381, in 0.063s
[93/100] 1 tree, 31 leaves, max depth = 12, train loss: 0.00249, val loss: 0.00379, in 0.066s
[94/100] 1 tree, 31 leaves, max depth = 17, train loss: 0.00245, val loss: 0.00378, in 0.065s
[95/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00242, val loss: 0.00376, in 0.065s
[96/100] 1 tree, 31 leaves, max depth = 14, train loss: 0.00239, val loss: 0.00375, in 0.064s
[97/100] 1 tree, 31 leaves, max depth = 15, train loss: 0.00236, val loss: 0.00372, in 0.068s
[98/100] 1 tree, 31 leaves, max depth = 17, train loss: 0.00234, val loss: 0.00371, in 0.060s
[99/100] 1 tree, 31 leaves, max depth = 21, train loss: 0.00232, val loss: 0.00370, in 0.068s
[100/100] 1 tree, 31 leaves, max depth = 11, train loss: 0.00230, val loss: 0.00368, in 0.066s
Fit 100 trees in 8.440 s, (3100 total leaves)
Time spent computing histograms: 2.781s
Time spent finding best splits: 0.320s
Time spent applying splits: 0.423s
Time spent predicting: 0.086s
Training finished, took 8.475976943969727 seconds. Model saved at: *****\142-20220605-173439
Supervised algorithm: computing training-set ACC
Done. Training-set accuracy: 0.99939108823808

Printing Logs for Algorithms Without Built-in Training Output

These algorithms do not produce any training log of their own. Modifying the source would require a deep understanding of each algorithm's internals, and in a mature library like sklearn the underlying code may be compiled extensions rather than plain .py files, so patching the source could be quite hard. But if the goal is merely to confirm that the algorithm is actually running, there is no need to print anything tied to its internals; things like the elapsed training time and which training set is in use are enough. So I chose to use a Python thread that prints some simple information while the algorithm trains, just so the user can see it is running.

Printing Training Info from a Background Thread

This is nothing more than basic Python threading; there is no complicated synchronization, shared state, or locking. For this simple scenario the only thing worth thinking about is how to stop the thread, and even that is straightforward, so let's go straight to the code.

import random
import sys
import time
from threading import Thread

# the task that runs in the background
def print_train_content(algorithm, start, stop_flag):
    while 1:
        print('%s is training, elapsed: %.2f s' % (algorithm, time.time() - start))
        time.sleep(random.uniform(0, 3))
        if stop_flag():
            break


def train_model(algorithm, clf, x, y, train_log_path):
    cmd = sys.stdout
    sys.stdout = Logger(train_log_path, sys.stdout)
    # sys.stdout = open(train_log_path, mode='a', encoding='utf-8')
    start = time.time()

    if algorithm in ['CART', 'KNN', 'HBOS', 'OCSVM', 'KDE', 'LOF']:
        stop_flag = False
        t = Thread(target=print_train_content, args=(algorithm, start, lambda: stop_flag))
        t.daemon = True
        t.start()
        clf.fit(x, y)
        stop_flag = True
    else:
        clf.fit(x, y)

    end = time.time()
    # sys.stdout.close()
    sys.stdout = cmd

    return start, end

The thread in the code above only prints the elapsed training time; you can have it print whatever you like simply by passing in different arguments.

The thread is stopped through the stop_flag marker. The flag could be declared as a global variable, but here I wrap it in a lambda so the thread re-reads the current value of the local variable each time, which looks a little neater.
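Here is a standalone illustration of the lambda-as-flag trick, as a minimal demo separate from the original code:

import time
from threading import Thread

stop_flag = False

def worker(should_stop):
    # should_stop() re-reads stop_flag on every call, so it sees the later change
    while not should_stop():
        print('working...')
        time.sleep(0.5)

t = Thread(target=worker, args=(lambda: stop_flag,), daemon=True)
t.start()
time.sleep(2)
stop_flag = True  # the lambda now returns True and the loop exits
t.join()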

Sample Output

Algorithm OCSVM is training
OCSVM is training, elapsed: 0.00 s
OCSVM is training, elapsed: 2.34 s
OCSVM is training, elapsed: 4.77 s
OCSVM is training, elapsed: 5.32 s
OCSVM is training, elapsed: 5.35 s
OCSVM is training, elapsed: 6.18 s
OCSVM is training, elapsed: 7.84 s
OCSVM is training, elapsed: 8.27 s
OCSVM is training, elapsed: 9.33 s
OCSVM is training, elapsed: 9.60 s
OCSVM is training, elapsed: 12.00 s
OCSVM is training, elapsed: 14.31 s
OCSVM is training, elapsed: 15.62 s
OCSVM is training, elapsed: 16.47 s
OCSVM is training, elapsed: 16.85 s
OCSVM is training, elapsed: 17.16 s
OCSVM is training, elapsed: 19.70 s
OCSVM is training, elapsed: 20.43 s
OCSVM is training, elapsed: 20.66 s
OCSVM is training, elapsed: 21.01 s
OCSVM is training, elapsed: 22.92 s
OCSVM is training, elapsed: 24.12 s
OCSVM is training, elapsed: 24.96 s
OCSVM is training, elapsed: 25.63 s
OCSVM is training, elapsed: 26.88 s
OCSVM is training, elapsed: 27.00 s
OCSVM is training, elapsed: 28.12 s
OCSVM is training, elapsed: 28.43 s
OCSVM is training, elapsed: 30.28 s
OCSVM is training, elapsed: 32.33 s
OCSVM is training, elapsed: 34.06 s
OCSVM is training, elapsed: 36.52 s
OCSVM is training, elapsed: 38.98 s
OCSVM is training, elapsed: 39.21 s
OCSVM is training, elapsed: 39.71 s
OCSVM is training, elapsed: 42.49 s
OCSVM is training, elapsed: 43.58 s
OCSVM is training, elapsed: 44.71 s
OCSVM is training, elapsed: 44.83 s
OCSVM is training, elapsed: 46.16 s
OCSVM is training, elapsed: 47.25 s
OCSVM is training, elapsed: 47.54 s
OCSVM is training, elapsed: 47.58 s
OCSVM is training, elapsed: 47.75 s
OCSVM is training, elapsed: 49.05 s
OCSVM is training, elapsed: 49.34 s
OCSVM is training, elapsed: 50.88 s
OCSVM is training, elapsed: 53.53 s
OCSVM is training, elapsed: 54.78 s
OCSVM is training, elapsed: 57.68 s
OCSVM is training, elapsed: 58.34 s
Training finished, took 59.8736629486084 seconds. Model saved at: ***\167-20220622-165753