TF-Faster-rcnn Pet Dog Breed Recognition: Model Training (Final Part)

1. Introduction

This is the final installment of the "TF-Faster-rcnn pet dog breed recognition" series, and it covers a fair amount of ground: an analysis of the faster-rcnn code framework, how to configure the code to train on your own data, the problems that came up during training and how to solve them, and finally how to validate the model and what the results look like. First, a word on the faster-rcnn version I used: it is smallcorgi's TensorFlow implementation of rbg's original Caffe faster-rcnn, written in Python 2.7. I ran it on tensorflow-gpu 1.4.0, and as far as I can tell it only runs on the GPU build of TensorFlow. The GitHub link:

github.com/smallcorgi/F

I will walk through everything below; please read on patiently!

2. faster-rcnn framework analysis

Running `tree -L 2` in the faster-rcnn root directory produces the following code tree:
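The tree screenshot did not survive the export; reconstructed from the directory descriptions that follow (not exhaustive, and subdirectory placement is my assumption except for experiments/scripts, which the git diff below confirms), the top two levels look roughly like this:

```
Faster-RCNN_TF
├── data            # demo data, training datasets, pre-trained weights
├── experiments     # train/test shell scripts plus yml config files
│   └── scripts     # faster_rcnn_end2end.sh lives here
├── lib             # core library: datasets, fast_rcnn, networks, nms, rpn_msr, ...
├── tools           # demo.py, train_net.py, test_net.py
└── README.md
```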

data directory: holds the demo data. To train on the pascal voc dataset you place the training data here, and the pre-trained model weights you download also go here; the pre-trained weights mainly serve to speed up training convergence.

experiments directory: shell scripts for training and testing, plus configuration files. I trained with faster_rcnn_end2end.sh. You do not have to use it, but if you skip the script you should port the settings from its configuration file faster_rcnn_end2end.yml into config.py.

lib directory: the core library code needed for training. datasets handles data loading, and its pascal_voc.py is the file we particularly need to modify. fast_rcnn implements the fast rcnn half of faster-rcnn; its config.py defines the cfg = __C configuration dictionary that much of the code depends on, and it is where we tune hyperparameters during training. train.py and test.py also live here, and they are where sess.run actually happens. gt_data_layer implements the interfaces for preparing and fetching roidb minibatches. networks wraps the whole network definition, including the VGGnet model. nms implements non-maximum suppression, used to pick the best boxes. roi_data_layer and roi_pooling_layer implement the roi operations: they produce a set of rois and compare them against the ground-truth RPN boxes, keeping the regions whose overlap exceeds the threshold. rpn_msr holds the RPN settings, including feat_stride and anchor_scales, which we will need to adjust to fit our own image sizes. utils contains common helpers such as blob handling.

tools directory: scripts for running a trained faster-rcnn model to detect and recognize objects in images, including the demo script and the training/testing entry points. faster_rcnn_end2end.sh calls train_net.py and test_net.py from this directory, and we can also invoke them directly. Note that before running demo.py you must first run make under lib to build roi_pooling.so, or it will fail.

That is the overall layout of faster-rcnn. The repository root also contains a README.md; I recommend reading it, as it holds a lot of useful information.

3. Configuring faster-rcnn to train your own data

Even after cloning the code, training the pascal voc dataset exactly as the official docs describe will still run into problems, so some configuration is needed.

Step 1:

Install the libraries the official docs list: `cython`, `python-opencv`, `easydict`. The docs also explain how to test and train faster-rcnn, and I suggest running the official demo once. Optionally, download the PASCAL VOC 2007 dataset and symlink it into the data directory, and optionally download the pre-trained ImageNet models. I have shared both on Baidu Netdisk, link: pan.baidu.com/s/1dFOx6Q password: p1k1.

Step 2:

Depending on the errors you hit at runtime, you may need to install a few additional libraries; which ones varies from environment to environment.

Step 3:

If your dataset is not in pascal voc or a similar format but is a txt file you generated yourself, you must wire your own data into faster-rcnn. This step is critical; success or failure hinges on it. For the details, see my previous article "TF-Faster-rcnn pet dog breed recognition: wiring up your own data"; I will not repeat them here.

Step 4:

Set n_classes in VGGnet_train.py and VGGnet_test.py to your number of classes plus one for the background. With 100 dog breeds that makes 101 in total; otherwise you will get tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [21] rhs shape= [101]

Step 5:

Modify train_net.py to set sensible default hyperparameters, so the run does not go off the rails when an argument is missing.

Step 6:

The original code saves a model snapshot every 5000 iterations, but provides no way to validate or inspect training progress along the way, so we need to add our own instrumentation. In lib/fast_rcnn/train.py I use the tf.summary module to log the total loss along with the rpn loss, box loss, and so on. This step really is worthwhile: without any way to observe the model you cannot steer it, and discovering a problem only after training finishes is too late. The git diff of the change:

diff --git a/lib/fast_rcnn/train.py b/lib/fast_rcnn/train.py
index d7633ee..b265ce7 100644
--- a/lib/fast_rcnn/train.py
+++ b/lib/fast_rcnn/train.py
@@ -147,6 +147,14 @@ class SolverWrapper(object):
         momentum = cfg.TRAIN.MOMENTUM
         train_op = tf.train.MomentumOptimizer(lr, momentum).minimize(loss, global_step=global_step)
 
+        train_writer = tf.summary.FileWriter('/home/xsr-ai/study/Faster-RCNN_TF/output/logs', sess.graph)
+        tf.summary.scalar("loss_total", loss)
+        tf.summary.scalar("rpn_loss_cls_value", rpn_cross_entropy)
+        tf.summary.scalar("rpn_loss_box_value", rpn_loss_box)
+        tf.summary.scalar("loss_cls_value", cross_entropy)
+        tf.summary.scalar("loss_box_value", loss_box)
+        summary_merged = tf.summary.merge_all()
+
         # iintialize variables
         sess.run(tf.global_variables_initializer())
         if self.pretrained_model is not None:
@@ -161,7 +169,7 @@ class SolverWrapper(object):
             blobs = data_layer.forward()
 
             # Make one SGD update
-            feed_dict={self.net.data: blobs['data'], self.net.im_info: blobs['im_info'], self.net.keep_prob: 0.5, \
+            feed_dict={self.net.data: blobs['data'], self.net.im_info: blobs['im_info'], self.net.keep_prob: 0.8, \
                            self.net.gt_boxes: blobs['gt_boxes']}
 
             run_options = None
@@ -172,7 +180,7 @@ class SolverWrapper(object):
 
             timer.tic()
 
-            rpn_loss_cls_value, rpn_loss_box_value,loss_cls_value, loss_box_value, _ = sess.run([rpn_cross_entropy, rpn_loss_box, cross_entropy, loss_box, train_op],
+            merged, rpn_loss_cls_value, rpn_loss_box_value,loss_cls_value, loss_box_value, _ = sess.run([summary_merged, rpn_cross_entropy, rpn_loss_box, cross_entropy, loss_box, train_op],
                                                                                                 feed_dict=feed_dict,
                                                                                                 options=run_options,
                                                                                                 run_metadata=run_metadata)
@@ -194,6 +202,11 @@ class SolverWrapper(object):
                 last_snapshot_iter = iter
                 self.snapshot(sess, iter)
 
+            if iter % 200 == 0:
+                train_writer.add_summary(merged, iter)
+
+        train_writer.close()
+
         if last_snapshot_iter != iter:
             self.snapshot(sess, iter)

Step 7:

Change the dropout probability. The source sets keep_prob to 0.5 because the original setup has only 21 classes with comparatively many images, whereas my data has 100 classes and, after processing, roughly 100,000 RPN samples, so there are plenty of feature parameters. There is no need to drop that much; the model needs to retain more of its features, so I raised keep_prob to 0.8.

All of the source changes, generated with git diff, are below; if you want to follow along you can git apply them to your checkout:

diff --git a/experiments/scripts/faster_rcnn_end2end.sh b/experiments/scripts/faster_rcnn_end2end.sh
index be09f43..8409c12 100755
--- a/experiments/scripts/faster_rcnn_end2end.sh
+++ b/experiments/scripts/faster_rcnn_end2end.sh
@@ -27,7 +27,7 @@ case $DATASET in
     TRAIN_IMDB="voc_2007_trainval"
     TEST_IMDB="voc_2007_test"
     PT_DIR="pascal_voc"
-    ITERS=70000
+    ITERS=40000
     ;;
   coco)
     # This is a very long and slow training schedule
diff --git a/lib/datasets/pascal_voc.py b/lib/datasets/pascal_voc.py
index a00f43d..3b05b15 100644
--- a/lib/datasets/pascal_voc.py
+++ b/lib/datasets/pascal_voc.py
@@ -26,18 +26,58 @@ class pascal_voc(imdb):
         imdb.__init__(self, 'voc_' + year + '_' + image_set)
         self._year = year
         self._image_set = image_set
-        self._devkit_path = self._get_default_path() if devkit_path is None \
-                            else devkit_path
-        self._data_path = os.path.join(self._devkit_path, 'VOC' + self._year)
+        self._devkit_path = "/home/xsr-ai/Desktop/DetectDogs/enhance"
+        self._data_path = os.path.join(self._devkit_path, 'train')
         self._classes = ('__background__', # always index 0
-                         'aeroplane', 'bicycle', 'bird', 'boat',
-                         'bottle', 'bus', 'car', 'cat', 'chair',
-                         'cow', 'diningtable', 'dog', 'horse',
-                         'motorbike', 'person', 'pottedplant',
-                         'sheep', 'sofa', 'train', 'tvmonitor')
+                         'dog-1', 'dog-2', 'dog-3', 'dog-4', 'dog-5',
+                         'dog-6', 'dog-7', 'dog-8', 'dog-9', 'dog-10',
+                         'dog-11', 'dog-12', 'dog-13', 'dog-14', 'dog-15',
+                         'dog-16', 'dog-17', 'dog-18', 'dog-19', 'dog-20',
+                         'dog-21', 'dog-22', 'dog-23', 'dog-24', 'dog-25',
+                         'dog-26', 'dog-27', 'dog-28', 'dog-29', 'dog-30',
+                         'dog-31', 'dog-32', 'dog-33', 'dog-34', 'dog-35',
+                         'dog-36', 'dog-37', 'dog-38', 'dog-39', 'dog-40',
+                         'dog-41', 'dog-42', 'dog-43', 'dog-44', 'dog-45',
+                         'dog-46', 'dog-47', 'dog-48', 'dog-49', 'dog-50',
+                         'dog-51', 'dog-52', 'dog-53', 'dog-54', 'dog-55',
+                         'dog-56', 'dog-57', 'dog-58', 'dog-59', 'dog-60',
+                         'dog-61', 'dog-62', 'dog-63', 'dog-64', 'dog-65',
+                         'dog-66', 'dog-67', 'dog-68', 'dog-69', 'dog-70',
+                         'dog-71', 'dog-72', 'dog-73', 'dog-74', 'dog-75',
+                         'dog-76', 'dog-77', 'dog-78', 'dog-79', 'dog-80',
+                         'dog-81', 'dog-82', 'dog-83', 'dog-84', 'dog-85',
+                         'dog-86', 'dog-87', 'dog-88', 'dog-89', 'dog-90',
+                         'dog-91', 'dog-92', 'dog-93', 'dog-94', 'dog-95',
+                         'dog-96', 'dog-97', 'dog-98', 'dog-99', 'dog-100'
+                         )
+        self.mapping_classes = ('__background__', # always index 0
+                                '0', '1', '2', '3', '4',
+                                '5', '6', '7', '8', '9',
+                                '10', '11', '12', '13', '14',
+                                '16', '17', '18', '19', '20',
+                                '21', '22', '23', '24', '25',
+                                '26', '27', '28', '29', '30',
+                                '31', '32', '33', '34', '35',
+                                '36', '37', '38', '39', '40',
+                                '41', '42', '43', '45', '46',
+                                '47', '48', '49', '50', '51',
+                                '52', '53', '54', '57', '59',
+                                '60', '61', '62', '63', '64',
+                                '65', '66', '67', '68', '69',
+                                '70', '71', '72', '73', '74',
+                                '75', '76', '77', '78', '79',
+                                '80', '81', '82', '83', '84',
+                                '85', '86', '87', '88', '94',
+                                '95', '97', '101', '109', '111',
+                                '114', '115', '120', '123', '126',
+                                '127', '128', '129', '132', '133'
+                                )
         self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))
+        self._mapping_class_to_ind = dict(zip(self.mapping_classes, xrange(self.num_classes)))
         self._image_ext = '.jpg'
-        self._image_index = self._load_image_set_index()
+        self._linesinfo = list()
+        self._im_id = self._load_image_set_index()
+        self._image_index = self._im_id
         # Default to roidb handler
         #self._roidb_handler = self.selective_search_roidb
         self._roidb_handler = self.gt_roidb
@@ -67,11 +107,10 @@ class pascal_voc(imdb):
         """
         Construct an image path from the image's "index" identifier.
         """
-        image_path = os.path.join(self._data_path, 'JPEGImages',
-                                  index + self._image_ext)
-        assert os.path.exists(image_path), \
-                'Path does not exist: {}'.format(image_path)
-        return image_path
+
+        im_path = self._linesinfo[index].split(" ")[0]
+
+        return im_path
 
     def _load_image_set_index(self):
         """
@@ -79,12 +118,17 @@ class pascal_voc(imdb):
         """
         # Example path to image set file:
         # self._devkit_path + /VOCdevkit2007/VOC2007/ImageSets/Main/val.txt
-        image_set_file = os.path.join(self._data_path, 'ImageSets', 'Main',
-                                      self._image_set + '.txt')
+        image_set_file = "/home/xsr-ai/Desktop/DetectDogs/enhance/train_lable_rpn.txt"
+
         assert os.path.exists(image_set_file), \
                 'Path does not exist: {}'.format(image_set_file)
-        with open(image_set_file) as f:
-            image_index = [x.strip() for x in f.readlines()]
+
+        f = open(image_set_file, "r")
+        self._linesinfo = f.readlines()
+        index = len(self._linesinfo)
+        image_index = [x for x in range(index)]
+        f.close()
+
         return image_index
 
     def _get_default_path(self):
@@ -106,8 +150,7 @@ class pascal_voc(imdb):
             print '{} gt roidb loaded from {}'.format(self.name, cache_file)
             return roidb
 
-        gt_roidb = [self._load_pascal_annotation(index)
-                    for index in self.image_index]
+        gt_roidb = self._load_pascal_txt()
         with open(cache_file, 'wb') as fid:
             cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL)
         print 'wrote gt roidb to {}'.format(cache_file)
@@ -226,6 +269,47 @@ class pascal_voc(imdb):
                 'flipped' : False,
                 'seg_areas' : seg_areas}
 
+    def _load_pascal_txt(self):
+        """
+        Load image and bounding boxes info from txt file in the personal format.
+        """
+        gt_roidb = list()
+
+        filename = "/home/xsr-ai/Desktop/DetectDogs/enhance/train_lable_rpn.txt"
+
+        hd = open(filename, "r")
+        for line in hd.readlines():
+            lineinfo = line.split(" ")
+
+            num_objs = int(lineinfo[2])
+            objs = range(num_objs)
+            boxes = np.zeros((num_objs, 4), dtype=np.uint16)
+            gt_classes = np.zeros((num_objs), dtype=np.int32)
+            overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)
+            # "Seg" area for pascal is just the box area
+            seg_areas = np.zeros((num_objs), dtype=np.float32)
+            cls = self._mapping_class_to_ind[lineinfo[1]]  # 1 col is class, fixed not dynamic
+
+            for ix, obj in enumerate(objs):
+                bbox = lineinfo[3+obj] # roi begin at 3 col
+                bbox = bbox.split(",")
+                x1,y1,x2,y2 = [float(pos)-1 if int(pos) > 1 else float(pos) for pos in bbox]
+                boxes[ix, :] = [x1, y1, x2, y2]
+                gt_classes[ix] = cls
+                overlaps[ix, cls] = 1.0
+                seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)
+
+            overlaps = scipy.sparse.csr_matrix(overlaps)
+            gt_roidb_dict = {'boxes' : boxes,
+                             'gt_classes': gt_classes,
+                             'gt_overlaps' : overlaps,
+                             'flipped' : False,
+                             'seg_areas' : seg_areas}
+            gt_roidb.append(gt_roidb_dict)
+        hd.close()
+
+        return gt_roidb
+
     def _get_comp_id(self):
         comp_id = (self._comp_id + '_' + self._salt if self.config['use_salt']
             else self._comp_id)
diff --git a/lib/fast_rcnn/config.py b/lib/fast_rcnn/config.py
index 12210d1..b2a2528 100644
--- a/lib/fast_rcnn/config.py
+++ b/lib/fast_rcnn/config.py
@@ -60,10 +60,10 @@ __C.IS_MULTISCALE = False
 __C.TRAIN.SCALES = (600,)
 
 # Max pixel size of the longest side of a scaled input image
-__C.TRAIN.MAX_SIZE = 1000
+__C.TRAIN.MAX_SIZE = 800
 
 # Images to use per minibatch
-__C.TRAIN.IMS_PER_BATCH = 2
+__C.TRAIN.IMS_PER_BATCH = 1
 
 # Minibatch size (number of regions of interest [ROIs])
 __C.TRAIN.BATCH_SIZE = 128
@@ -72,11 +72,11 @@ __C.TRAIN.BATCH_SIZE = 128
 __C.TRAIN.FG_FRACTION = 0.25
 
 # Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)
-__C.TRAIN.FG_THRESH = 0.5
+__C.TRAIN.FG_THRESH = 0.6
 
 # Overlap threshold for a ROI to be considered background (class = 0 if
 # overlap in [LO, HI))
-__C.TRAIN.BG_THRESH_HI = 0.5
+__C.TRAIN.BG_THRESH_HI = 0.6
 __C.TRAIN.BG_THRESH_LO = 0.1
 
 # Use horizontally-flipped images during training?
@@ -87,7 +87,7 @@ __C.TRAIN.BBOX_REG = True
 
 # Overlap required between a ROI and ground-truth box in order for that ROI to
 # be used as a bounding-box regression training example
-__C.TRAIN.BBOX_THRESH = 0.5
+__C.TRAIN.BBOX_THRESH = 0.6
 
 # Iterations between snapshots
 __C.TRAIN.SNAPSHOT_ITERS = 5000
@@ -107,7 +107,7 @@ __C.TRAIN.BBOX_NORMALIZE_TARGETS = True
 __C.TRAIN.BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)
 # Normalize the targets using "precomputed" (or made up) means and stdevs
 # (BBOX_NORMALIZE_TARGETS must also be True)
-__C.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED = False
+__C.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED = True
 __C.TRAIN.BBOX_NORMALIZE_MEANS = (0.0, 0.0, 0.0, 0.0)
 __C.TRAIN.BBOX_NORMALIZE_STDS = (0.1, 0.1, 0.2, 0.2)
 
@@ -120,9 +120,9 @@ __C.TRAIN.PROPOSAL_METHOD = 'selective_search'
 __C.TRAIN.ASPECT_GROUPING = True
 
 # Use RPN to detect objects
-__C.TRAIN.HAS_RPN = False
+__C.TRAIN.HAS_RPN = True
 # IOU >= thresh: positive example
-__C.TRAIN.RPN_POSITIVE_OVERLAP = 0.7
+__C.TRAIN.RPN_POSITIVE_OVERLAP = 0.6
 # IOU < thresh: negative example
 __C.TRAIN.RPN_NEGATIVE_OVERLAP = 0.3
 # If an anchor statisfied by positive and negative conditions set to negative
@@ -160,7 +160,7 @@ __C.TEST = edict()
 __C.TEST.SCALES = (600,)
 
 # Max pixel size of the longest side of a scaled input image
-__C.TEST.MAX_SIZE = 1000
+__C.TEST.MAX_SIZE = 800
 
 # Overlap threshold used for non-maximum suppression (suppress boxes with
 # IoU >= this threshold)
diff --git a/lib/fast_rcnn/train.py b/lib/fast_rcnn/train.py
index d7633ee..b265ce7 100644
--- a/lib/fast_rcnn/train.py
+++ b/lib/fast_rcnn/train.py
@@ -147,6 +147,14 @@ class SolverWrapper(object):
         momentum = cfg.TRAIN.MOMENTUM
         train_op = tf.train.MomentumOptimizer(lr, momentum).minimize(loss, global_step=global_step)
 
+        train_writer = tf.summary.FileWriter('/home/xsr-ai/study/Faster-RCNN_TF/output/logs', sess.graph)
+        tf.summary.scalar("loss_total", loss)
+        tf.summary.scalar("rpn_loss_cls_value", rpn_cross_entropy)
+        tf.summary.scalar("rpn_loss_box_value", rpn_loss_box)
+        tf.summary.scalar("loss_cls_value", cross_entropy)
+        tf.summary.scalar("loss_box_value", loss_box)
+        summary_merged = tf.summary.merge_all()
+
         # iintialize variables
         sess.run(tf.global_variables_initializer())
         if self.pretrained_model is not None:
@@ -161,7 +169,7 @@ class SolverWrapper(object):
             blobs = data_layer.forward()
 
             # Make one SGD update
-            feed_dict={self.net.data: blobs['data'], self.net.im_info: blobs['im_info'], self.net.keep_prob: 0.5, \
+            feed_dict={self.net.data: blobs['data'], self.net.im_info: blobs['im_info'], self.net.keep_prob: 0.8, \
                            self.net.gt_boxes: blobs['gt_boxes']}
 
             run_options = None
@@ -172,7 +180,7 @@ class SolverWrapper(object):
 
             timer.tic()
 
-            rpn_loss_cls_value, rpn_loss_box_value,loss_cls_value, loss_box_value, _ = sess.run([rpn_cross_entropy, rpn_loss_box, cross_entropy, loss_box, train_op],
+            merged, rpn_loss_cls_value, rpn_loss_box_value,loss_cls_value, loss_box_value, _ = sess.run([summary_merged, rpn_cross_entropy, rpn_loss_box, cross_entropy, loss_box, train_op],
                                                                                                 feed_dict=feed_dict,
                                                                                                 options=run_options,
                                                                                                 run_metadata=run_metadata)
@@ -194,6 +202,11 @@ class SolverWrapper(object):
                 last_snapshot_iter = iter
                 self.snapshot(sess, iter)
 
+            if iter % 200 == 0:
+                train_writer.add_summary(merged, iter)
+
+        train_writer.close()
+
         if last_snapshot_iter != iter:
             self.snapshot(sess, iter)
 
@@ -258,7 +271,9 @@ def train_net(network, imdb, roidb, output_dir, pretrained_model=None, max_iters
     """Train a Fast R-CNN network."""
     roidb = filter_roidb(roidb)
     saver = tf.train.Saver(max_to_keep=100)
-    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
+    config = tf.ConfigProto(allow_soft_placement=True)
+    config.gpu_options.allow_growth = True
+    with tf.Session(config=config) as sess:
         sw = SolverWrapper(sess, saver, network, imdb, roidb, output_dir, pretrained_model=pretrained_model)
         print 'Solving...'
         sw.train_model(sess, max_iters)
diff --git a/lib/make.sh b/lib/make.sh
index 15a616b..98cb507 100755
--- a/lib/make.sh
+++ b/lib/make.sh
@@ -1,4 +1,5 @@
 TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
+TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
 
 CUDA_PATH=/usr/local/cuda/
 CXXFLAGS=''
@@ -11,12 +12,12 @@ cd roi_pooling_layer
 
 if [ -d "$CUDA_PATH" ]; then
 	nvcc -std=c++11 -c -o roi_pooling_op.cu.o roi_pooling_op_gpu.cu.cc \
-		-I $TF_INC -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC $CXXFLAGS \
-		-arch=sm_37
+		-I $TF_INC -D GOOGLE_CUDA=1 -D_GLIBCXX_USE_CXX11_ABI=0 -x cu -Xcompiler -fPIC $CXXFLAGS \
+		-arch=sm_61
 
 	g++ -std=c++11 -shared -o roi_pooling.so roi_pooling_op.cc \
-		roi_pooling_op.cu.o -I $TF_INC  -D GOOGLE_CUDA=1 -fPIC $CXXFLAGS \
-		-lcudart -L $CUDA_PATH/lib64
+		roi_pooling_op.cu.o -I $TF_INC -D_GLIBCXX_USE_CXX11_ABI=0 -D GOOGLE_CUDA=1 -fPIC $CXXFLAGS \
+		-lcudart -L $CUDA_PATH/lib64 -L$TF_LIB -ltensorflow_framework
 else
 	g++ -std=c++11 -shared -o roi_pooling.so roi_pooling_op.cc \
 		-I $TF_INC -fPIC $CXXFLAGS
diff --git a/lib/networks/VGGnet_test.py b/lib/networks/VGGnet_test.py
index bdceca1..204cb4e 100644
--- a/lib/networks/VGGnet_test.py
+++ b/lib/networks/VGGnet_test.py
@@ -1,9 +1,9 @@
 import tensorflow as tf
 from networks.network import Network
 
-n_classes = 21
+n_classes = 101
 _feat_stride = [16,]
-anchor_scales = [8, 16, 32] 
+anchor_scales = [4, 8, 16, 32]
 
 class VGGnet_test(Network):
     def __init__(self, trainable=True):
diff --git a/lib/networks/VGGnet_train.py b/lib/networks/VGGnet_train.py
index e137f0b..a7e36a7 100644
--- a/lib/networks/VGGnet_train.py
+++ b/lib/networks/VGGnet_train.py
@@ -4,9 +4,9 @@ from networks.network import Network
 
 #define
 
-n_classes = 21
+n_classes = 101
 _feat_stride = [16,]
-anchor_scales = [8, 16, 32]
+anchor_scales = [4, 8, 16, 32]
 
 class VGGnet_train(Network):
     def __init__(self, trainable=True):
diff --git a/lib/rpn_msr/generate_anchors.py b/lib/rpn_msr/generate_anchors.py
index 1125a80..b113112 100644
--- a/lib/rpn_msr/generate_anchors.py
+++ b/lib/rpn_msr/generate_anchors.py
@@ -35,7 +35,7 @@ import numpy as np
 #       [-167., -343.,  184.,  360.]])
 
 def generate_anchors(base_size=16, ratios=[0.5, 1, 2],
-                     scales=2**np.arange(3, 6)):
+                     scales=2**np.arange(2, 6)):
     """
     Generate anchor (reference) windows by enumerating aspect ratios X
     scales wrt a reference (0, 0, 15, 15) window.
diff --git a/lib/rpn_msr/proposal_layer_tf.py b/lib/rpn_msr/proposal_layer_tf.py
index 1398409..7afc69c 100644
--- a/lib/rpn_msr/proposal_layer_tf.py
+++ b/lib/rpn_msr/proposal_layer_tf.py
@@ -19,7 +19,7 @@ DEBUG = False
 Outputs object detection proposals by applying estimated bounding-box
 transformations to a set of regular boxes (called "anchors").
 """
-def proposal_layer(rpn_cls_prob_reshape,rpn_bbox_pred,im_info,cfg_key,_feat_stride = [16,],anchor_scales = [8, 16, 32]):
+def proposal_layer(rpn_cls_prob_reshape,rpn_bbox_pred,im_info,cfg_key,_feat_stride = [16,],anchor_scales = [4, 8, 16, 32]):
     # Algorithm:
     #
     # for each (H, W) location i
diff --git a/tools/demo.py b/tools/demo.py
index 0ffd219..235d48c 100644
--- a/tools/demo.py
+++ b/tools/demo.py
@@ -31,6 +31,11 @@ def vis_detections(im, class_name, dets,ax, thresh=0.5):
         bbox = dets[i, :4]
         score = dets[i, -1]
 
+        print(bbox[0],)
+        print(bbox[1],)
+        print(bbox[2],)
+        print(bbox[3])
+
         ax.add_patch(
             plt.Rectangle((bbox[0], bbox[1]),
                           bbox[2] - bbox[0],
diff --git a/tools/train_net.py b/tools/train_net.py
index d225ed5..3a19509 100755
--- a/tools/train_net.py
+++ b/tools/train_net.py
@@ -26,7 +26,7 @@ def parse_args():
     """
     parser = argparse.ArgumentParser(description='Train a Fast R-CNN network')
     parser.add_argument('--device', dest='device', help='device to use',
-                        default='cpu', type=str)
+                        default='gpu', type=str)
     parser.add_argument('--device_id', dest='device_id', help='device id to use',
                         default=0, type=int)
     parser.add_argument('--solver', dest='solver',
@@ -34,22 +34,22 @@ def parse_args():
                         default=None, type=str)
     parser.add_argument('--iters', dest='max_iters',
                         help='number of iterations to train',
-                        default=70000, type=int)
+                        default=40000, type=int)
     parser.add_argument('--weights', dest='pretrained_model',
                         help='initialize with pretrained model weights',
-                        default=None, type=str)
+                        default="data/pretrain_model/VGG_imagenet.npy", type=str)
     parser.add_argument('--cfg', dest='cfg_file',
                         help='optional config file',
                         default=None, type=str)
     parser.add_argument('--imdb', dest='imdb_name',
                         help='dataset to train on',
-                        default='kitti_train', type=str)
+                        default='voc_2007_trainval', type=str)
     parser.add_argument('--rand', dest='randomize',
                         help='randomize (do not use a fixed seed)',
                         action='store_true')
     parser.add_argument('--network', dest='network_name',
                         help='name of the network',
-                        default=None, type=str)
+                        default="VGGnet_train", type=str)
     parser.add_argument('--set', dest='set_cfgs',
                         help='set config keys', default=None,
                         nargs=argparse.REMAINDER)

4. Training problems and their solutions

Problem 1: out of memory during training

This happens because TensorFlow grabs all of the GPU memory at startup, which then leaves too little when more is needed. To avoid it during training, change the TensorFlow session configuration so GPU memory grows on demand, by modifying the train_net interface in train.py (the change is also included in the git diff above).
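Pulled out of the diff, the relevant session-setup fragment looks like this:

```python
import tensorflow as tf

# Do not reserve all GPU memory up front; grow the allocation on demand.
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    pass  # training loop goes here
```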

Problem 2: RuntimeWarning: invalid value encountered in divide

When this appears, double-check that your data adapter is correct: each image index must map to exactly one roidb entry. Also delete the cache under the data directory so the roidb data is regenerated; otherwise your data changes are ignored and the previous cache is reused.

Problem 3: Invalid argument: Assign requires shapes of both tensors to match

For both train and test, n_classes in VGGnet_train.py and VGGnet_test.py must be changed to match your number of classes, or this error is raised.

Problem 4: overflow error when flipping the roidb data

This is caused by invalid values in the annotation box coordinates: a value below 0 stored as an unsigned integer wraps around to a huge number like 65535. We must guarantee that every box's x1, y1, x2, y2 lies within the image and that x1 < x2 and y1 < y2; I enforce this with a validity check over the generated boxes.
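My original validity-check code was in a screenshot that did not survive; a minimal sketch of such a check (the function name and exact clamping policy are my reconstruction, not the original code) could look like this:

```python
def sanitize_box(x1, y1, x2, y2, width, height):
    """Clamp a generated box into the image and reject degenerate boxes.

    Keeping every coordinate in [0, width-1] x [0, height-1] and requiring
    x1 < x2 and y1 < y2 prevents the unsigned wrap-around (e.g. -1 stored
    as uint16 becomes 65535) that breaks roidb flipping.
    Returns the clipped box, or None when the box is unusable.
    """
    x1 = min(max(x1, 0), width - 1)
    x2 = min(max(x2, 0), width - 1)
    y1 = min(max(y1, 0), height - 1)
    y2 = min(max(y2, 0), height - 1)
    if x1 >= x2 or y1 >= y2:
        return None
    return x1, y1, x2, y2

print(sanitize_box(-3, 10, 50, 40, 100, 100))   # (0, 10, 50, 40)
print(sanitize_box(10, 10, 5, 40, 100, 100))    # None: x1 >= x2
```

Boxes that come back as None are simply dropped before the roidb is built.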

Problem 5: tensorflow.python.framework.errors_impl.NotFoundError

This error shows up when compiling roi_pooling.so; the TensorFlow build layout changed across versions, so lib/make.sh needs some changes.

The lib/make.sh changes are included in the git diff above: resolve the TensorFlow library path via tf.sysconfig.get_lib(), compile with -D_GLIBCXX_USE_CXX11_ABI=0, link against -ltensorflow_framework, and set -arch to match your GPU (sm_61 in my case).

Problem 6: loss becomes nan

This comes from touching the learning rate, i.e. setting it too high. The default is 0.001; raising it to 0.1 or 0.05 triggers the problem, so the learning rate has to stay moderate. I was experimenting with it precisely to see whether the loss oscillation was caused by a badly chosen learning rate.

Problem 7: during training the total loss oscillates wildly around 1.0 and will not converge

This one was much trickier, and I spent a long time on it, mainly because each experiment only tells you anything after the model has run long enough for the results to be meaningful, so every iteration is slow. Below is how the logs I generated look in tensorboard:

You can see loss_total and loss_cls_value oscillating around 1.0 without converging. Clearly either the model or the training data is at fault, and since data problems ultimately trace back to the model (wrong parameters or a flawed optimization setup), the task reduces to tuning and optimizing the model.

To attack the non-converging loss I examined my data and found that most images differ in size, with target regions ranging from very large to very small. Although the model normalizes its inputs, I still tried tf.image.resize_image_with_crop_or_pad to 600x600 and then regenerated the ground-truth RPN boxes, but the results got worse: fewer boxes were recovered and the training loss oscillated even more violently. I then tried various combinations of __C.TRAIN.SCALES and __C.TRAIN.MAX_SIZE in config.py, lowered __C.TRAIN.RPN_POSITIVE_OVERLAP to 0.6, and adjusted the anchors_scale values, all without success. That route was a dead end.

So I went back to the augmented dataset without resizing to a uniform size. During training I varied the learning rate and found that too large a value causes the nan problem, so it cannot be pushed up. I also initialized the weights from the pretrain model, but the problem persisted.

Finally I worked out the likely cause: the model uniformly rescales images to at most 1000x600, and after rescaling the overlap between the ground-truth RPN boxes and the rois drops, so an image may yield no detectable target at all and contribute nothing to learning. On top of that, the target regions are sometimes tiny and sometimes huge, and after rescaling the anchors on the feature map cannot cover a complete RPN target. So I added an extra anchors_scale value of 4 (this touches several files; see the git diff I posted), changed the rescale limit to 800x600, and lowered __C.TRAIN.RPN_POSITIVE_OVERLAP to 0.6, since our boxes are auto-generated and do not fully cover each target region. config.py carries a few more changes as well; please check the git diff for those too.
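The effect of the extra scale is easy to check numerically: with base_size 16 (the VGG16 feat_stride), the change in generate_anchors.py from 2**np.arange(3, 6) to 2**np.arange(2, 6) adds a 64x64 reference anchor, giving the RPN a chance at the small dogs:

```python
import numpy as np

base_size = 16                      # VGG16 feat_stride
old_scales = 2 ** np.arange(3, 6)   # 8, 16, 32
new_scales = 2 ** np.arange(2, 6)   # 4, 8, 16, 32

# side length in input pixels of the square reference anchor at each scale
old_sides = (base_size * old_scales).tolist()
new_sides = (base_size * new_scales).tolist()
print(old_sides)   # [128, 256, 512]
print(new_sides)   # [64, 128, 256, 512]
```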

After these changes, retraining shows the loss still oscillating but converging far better, reaching around 0.1. Why does it keep oscillating, shrinking slowly like a damped spring? My interpretation: with 50,000 images over 100 classes, the model may at first mostly see, say, 20 of the classes, and then images of the remaining classes suddenly arrive, or a class that has only been seen a few times reappears after others have been trained heavily, and the loss jumps. Also, my boxes are not hand-annotated; the generation script cannot tell whether an image contains two dogs of different breeds. It draws a box around every detected dog and labels each box with the image-level class, so one image can contain dogs of different breeds that all carry the same label. Those are effectively noisy labels, which plausibly explains the oscillation; and indeed the oscillation range shrinks the longer training runs.

Hyperparameter tuning really is grunt work and eats far too much time, but it paid off: the loss for 100-class pet dog detection now converges. Cause for celebration!

5. Model validation and results

How do we validate the model? The code does provide test_net.py, but it would need quite a few changes, so I decided to validate with my own demo-based script instead; you could equally adapt the training code path. So how is it done?

After training finishes, a series of checkpoint files appears under faster-rcnn/output/faster_rcnn_end2end/voc_2007_trainval. I set the iteration count to 40000, and every 5000 iterations the snapshot interface writes a tensorflow checkpoint file, so we use the last ckpt file to validate the model.

My approach: in demo.py, set CLASSES to the same 101 classes (including background) used in training, then loop over the test images and their classes, feeding them to the model one by one. I set the threshold to 0.6, i.e. a detection with confidence above 0.6 counts as a hit. If the image produces a detection for its ground-truth class, count it as correct; if the model detects the wrong target or nothing at all (quite possible: even the original model trained for 70000 iterations fails to detect the dog in very many of my test images), count it as incorrect. Finally, compute each count's share of the total test data to get the model's accuracy. This measure is not perfectly precise and carries some error, but it reflects the quality of the trained model well enough.

I also found that changing _feat_stride in VGGnet_test.py changes the results, again because the image sizes vary so much; manually cropping the images and annotating the target regions would be the most reliable fix. The test data is also correlated with the training data, because we augmented the dataset first and split into train, val, and test afterwards. Fully independent training and test sets would be ideal, but with so little data this will have to do for now.

Without further ado, here is the code:

import _init_paths
import tensorflow as tf
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect
from fast_rcnn.nms_wrapper import nms
from utils.timer import Timer
import matplotlib.pyplot as plt
import numpy as np
import os, sys, cv2
import argparse
from networks.factory import get_network

CLASSES = ('__background__',
           '0', '1', '2', '3', '4',
            '5', '6', '7', '8', '9',
            '10', '11', '12', '13', '14',
            '16', '17', '18', '19', '20',
            '21', '22', '23', '24', '25',
            '26', '27', '28', '29', '30',
            '31', '32', '33', '34', '35',
            '36', '37', '38', '39', '40',
            '41', '42', '43', '45', '46',
            '47', '48', '49', '50', '51',
            '52', '53', '54', '57', '59',
            '60', '61', '62', '63', '64',
            '65', '66', '67', '68', '69',
            '70', '71', '72', '73', '74',
            '75', '76', '77', '78', '79',
            '80', '81', '82', '83', '84',
            '85', '86', '87', '88', '94',
            '95', '97', '101', '109', '111',
            '114', '115', '120', '123', '126',
            '127', '128', '129', '132', '133'
            )
mapping_class_to_ind = dict(zip(CLASSES, xrange(len(CLASSES))))

def demo(sess, net, image_name, cls_dog):
    """Detect object classes in an image using pre-computed object proposals."""

    im = cv2.imread(image_name)

    # Detect all object classes and regress object bounds
    timer = Timer()
    timer.tic()
    scores, boxes = im_detect(sess, net, im)
    timer.toc()
    print ('Detection took {:.3f}s for '
           '{:d} object proposals').format(timer.total_time, boxes.shape[0])

    cls_ind = mapping_class_to_ind[cls_dog]

    detect_num = 0
    undetect_num = 0

    CONF_THRESH = 0.6
    NMS_THRESH = 0.3

    cls_boxes = boxes[:, 4*cls_ind:4*(cls_ind + 1)]
    cls_scores = scores[:, cls_ind]
    dets = np.hstack((cls_boxes,
                      cls_scores[:, np.newaxis])).astype(np.float32)
    keep = nms(dets, NMS_THRESH)
    dets = dets[keep, :]
    inds = np.where(dets[:, -1] >= CONF_THRESH)[0]

    if len(inds) == 0:
        undetect_num = 1
    else:
        detect_num = 1

    return detect_num,undetect_num

def parse_args():
    """Parse input arguments."""
    parser = argparse.ArgumentParser(description='Faster R-CNN demo')
    parser.add_argument('--gpu', dest='gpu_id', help='GPU device id to use [0]',
                        default=0, type=int)
    parser.add_argument('--cpu', dest='cpu_mode',
                        help='Use CPU mode (overrides --gpu)',
                        action='store_true')
    parser.add_argument('--net', dest='demo_net', help='Network to use [vgg16]',
                        default='VGGnet_test')
    parser.add_argument('--model', dest='model', help='Model path',
                        default=' ')

    args = parser.parse_args()

    return args

if __name__ == '__main__':
    cfg.TEST.HAS_RPN = True  # Use RPN for proposals

    args = parse_args()

    if args.model == ' ':
        raise IOError(('Error: Model not found.\n'))
        
    # init session
    sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
    # load network
    net = get_network(args.demo_net)
    # load model
    saver = tf.train.Saver(write_version=tf.train.SaverDef.V1)
    saver.restore(sess, args.model)
   
    #sess.run(tf.initialize_all_variables())

    print '\n\nLoaded network {:s}'.format(args.model)

    # Warmup on a dummy image
    im = 128 * np.ones((300, 300, 3), dtype=np.uint8)
    for i in xrange(2):
        _, _= im_detect(sess, net, im)

    test_path = "/home/xsr-ai/Desktop/DetectDogs/enhance/test_lable.txt"
    hd = open(test_path, "r")
    lines = hd.readlines()
    im_total = len(lines)

    detect_total = 0
    undetect_total = 0

    for line in lines:
        im_name = line.split(" ")[0]
        cls_name = line.split(" ")[1]
        cls_name = cls_name.strip('\n')
        print("test %s" % im_name)
        detect_ok, detect_ng = demo(sess, net, im_name, cls_name)

        detect_total += detect_ok
        undetect_total += detect_ng

    p_ok = float(detect_total) / float(im_total)
    p_ng = float(undetect_total) / float(im_total)

    hd.close()

    print("detection accuracy over the 100 dog classes: %.2f" % p_ok)
    print("miss/error rate over the 100 dog classes: %.2f" % p_ng)

With _feat_stride set to 16, after letting the script chug along for a while, the final printed accuracy is only 56%! Not exactly ideal, is it? But then again, this is 100 classes with that little data, no manual image preprocessing, and bounding boxes my script generated automatically, so it will pass for now.
