[Paper Notes] Channel Pruning via Automatic Structure Search

0、Abstract

In this paper, we propose a new channel pruning method based on artificial bee colony algorithm (ABC), dubbed as ABCPruner, which aims to efficiently find optimal pruned structure, i.e., channel number in each layer, rather than selecting “important” channels as previous works did.

The opening introduces the new pruning method ABCPruner (based on the artificial bee colony algorithm, ABC) [channel level], and states that the method aims to find the optimal pruned structure rather than selecting the most "important" channels as previous works did.

…, we first propose to shrink the combinations where the preserved channels are limited to a specific space, … . And then, we formulate the search of optimal pruned structure as an optimization problem and integrate the ABC algorithm to solve it in an automatic manner to lessen human interference.

The rough flow of the new method: **shrink** the channel combinations into a specific space, then cast "searching for the optimal pruned structure (within this space)" as an **optimization** problem, and use ABCPruner to solve this optimization problem automatically.

1、Introduction

Channel pruning targets at removing the entire channel in each layer, which is straightforward but challenging because removing channels in one layer might drastically change the input of the next layer.

The drawback of channel pruning is briefly mentioned: removing part of the channels in one layer is very likely to change the input of the next layer. (However, it is only mentioned in passing; no solution is given anywhere in the paper.)

Most cutting-edge practice implements channel pruning by selecting channels (filters) based on rule-of-thumb designs.

It then notes that most existing methods implement channel pruning with rule-of-thumb designs: one line of work ranks the filters of a pre-trained model by weight magnitude and discards the unimportant ones; the other relies on hand-crafted rules, i.e., humans decide hyper-parameters such as the pruning rate before pruning. Several related papers are then cited, which are omitted here.

The motivation of our ABCPruner is two-fold.

Based on the two lines of work summarized above and the cited papers, the authors argue that, first, the essence of channel pruning is to find the optimal pruned structure rather than the important channels; and second, less human interference is preferable, so they borrow the idea of automatic hyper-parameter control from other work and apply it to pruning.

Given a CNN with L layers, the combinations of pruned structure could be \(\prod _ { j = 1 } ^ { L } c _ { j }\), where \(L\) is the number of layers and \(c_j\) is the channel number in the \(j\)-th layer. The combination overhead is extremely intensive.

To solve this problem, we propose to shrink the combinations by limiting the number of preserved channels to \(\{ 0.1 c _{ j } , 0.2 c _{ j } , \ldots , \alpha c _{ j } \}\) where the value of α falls in {10%,20%, …,100%}, which shows that there are 10α feasible solutions for each layer, …

For a network with \(L\) convolutional layers, there are \(\prod _ { j = 1 } ^ { L } c _ { j }\) possible pruning schemes. The authors consider this number far too large and want to reduce it. (No rigorous justification is given for the reduction; reading the whole paper, it appears to be proposed mainly for tractability.)
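As a concrete illustration (the layer widths here are assumed for the example, not taken from the paper), even a small 4-layer network with channels \((64, 128, 256, 512)\) already yields

\[
\prod_{j=1}^{4} c_j = 64 \times 128 \times 256 \times 512 = 2^{30} \approx 1.07 \times 10^{9}
\]

pruning combinations, whereas restricting every layer to at most 10 preserved-channel options (\(\alpha = 100\%\)) leaves only \(10^4\) candidate structures.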

Therefore, the authors limit the pruning choices of each layer to ten options: given the filter count \(c\) of that layer, the ten values 10%c, 20%c, …, 100%c form the selection space for that layer. This is what the authors mean by "shrinking the channel combinations into a specific space".
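A minimal sketch of how such a per-layer search space could be built; the layer widths and the helper name `candidate_channels` are illustrative assumptions, not the authors' code.

```python
# Hypothetical layer widths; alpha = 0.7 keeps at most 70% of each layer's channels.
original_channels = [64, 128, 256, 512]
alpha = 0.7

def candidate_channels(c_j, alpha):
    """Return the shrunken search space {0.1*c_j, 0.2*c_j, ..., alpha*c_j} as integers."""
    steps = int(round(alpha * 10))        # alpha in {0.1, ..., 1.0} -> 1..10 options per layer
    return [max(1, round(k * 0.1 * c_j)) for k in range(1, steps + 1)]

search_space = [candidate_channels(c, alpha) for c in original_channels]
print(search_space)   # e.g. first layer: [6, 13, 19, 26, 32, 38, 45]
```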

The overall pruning procedure (a code sketch of this loop follows the list):

  1. Initialize a structure set, where each element is the number of channels to preserve in the corresponding layer (in practice, several structure sets are initialized, each representing one pruning scheme)

  2. Randomly prune each layer according to this set

  3. Train for a few epochs and test the accuracy

  4. Update the structure sets with the ABC algorithm

  5. Repeat steps 2, 3, and 4

  6. Pick the best structure and fine-tune it
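The sketch below is self-contained and illustrates the loop above. The accuracy evaluation is replaced by a synthetic stand-in (`fake_accuracy`), and the structure update is reduced to a single ABC-style neighbor move per candidate; in the actual method each pruned network is trained for a few epochs and the full ABC update (described below) is used.

```python
import random

original_channels = [64, 128, 256, 512]                      # hypothetical layer widths

def candidates(c, alpha=0.7):
    """Per-layer search space {0.1c, 0.2c, ..., alpha*c}."""
    return [max(1, round(k * 0.1 * c)) for k in range(1, int(round(alpha * 10)) + 1)]

def fake_accuracy(structure):
    """Stand-in for 'prune, train a few epochs, test'; rewards keeping ~half of each layer."""
    return -sum(abs(s / c - 0.5) for s, c in zip(structure, original_channels))

space = [candidates(c) for c in original_channels]

# 1. initialize several candidate structures (channels preserved per layer)
structures = [[random.choice(opts) for opts in space] for _ in range(8)]

for _ in range(20):                                          # 5. repeat steps 2-4
    for j, s in enumerate(structures):
        # 4. ABC-style neighbor move: re-sample one randomly chosen layer
        neighbor = list(s)
        i = random.randrange(len(space))
        neighbor[i] = random.choice(space[i])
        # 2./3. keep the neighbor if its (stand-in) accuracy is higher
        if fake_accuracy(neighbor) > fake_accuracy(s):
            structures[j] = neighbor

best = max(structures, key=fake_accuracy)
print("best structure found:", best)                          # 6. this one would be fine-tuned
```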

2、Related Work

Network Pruning: weight pruning and channel pruning.

AutoML: automatic pruning

A brief overview of weight pruning and channel pruning is given, followed by the benefits of AutoML. The observation in [1810.05270] Rethinking the Value of Network Pruning (arxiv.org) that "the key of channel pruning lies in the pruned structure rather than in selecting 'important' channels" inspired the present method.

3、The Proposed ABCPruner

Given a CNN model \(N\) that contains \(L\) convolutional layers and its filters set \(W\), we refer to \(C = (c_1, c_2, …, c_L)\) as the network structure of \(N\), where \(c_j\) is the channel number of the \(j\)-th layer. Channel pruning aims to remove a portion of filters in \(W\) while keeping a comparable or even better accuracy.

For any pruned model \(\mathcal { N } ^ { \prime }\), we denote its structure as \(C^ { \prime } = \left( c _ { 1 } ^ { \prime } , c _ { 2 } ^ { \prime } , \ldots , c _ { L } ^ { \prime } \right)\), where \(c _ { j } ^ { \prime } \leq c _ { j }\) is the channel number of the pruned model in the \(j\)-th layer.

Parameter definitions: model \(N\), number of convolutional layers \(L\), filter set \(W\), and structure \(C = (c_1, c_2, …, c_L)\), where \(c_j\) is the channel number of the \(j\)-th layer. The structure of the pruned network is defined as \(C^ { \prime } = \left( c _ { 1 } ^ { \prime } , c _ { 2 } ^ { \prime } , \ldots , c _ { L } ^ { \prime } \right)\), with \(c _ { j } ^ { \prime } \leq c _ { j }\).

Goal: remove a portion of the filters in \(W\) while keeping the accuracy essentially unchanged.

As mentioned earlier, each layer's \(c _ { j } ^ { \prime }\) is upper-bounded by \(c_j\), the number of filters in that layer. Without any restriction, combining the choices across all layers yields far too many possibilities, so the authors restrict each \(c _ { j } ^ { \prime }\) to a fixed grid of percentages of \(c_j\): \(c _{ j } ^ { \prime } \in \left\{ 0.1 c _{ j } , 0.2 c _{ j } , \ldots , \alpha c _{ j } \right\}\).

Given the training set \(\mathcal { T } _ { \text {train} }\) and test set \(\mathcal { T } _ { \text {test} }\), we aim to find the optimal combination of \(C ^ { \prime }\), such that the pruned model \(\mathcal { N } ^ { \prime }\) trained/fine-tuned on \(\mathcal { T } _ { \text {train} }\) obtains the best accuracy. To that effect, we formulate our channel pruning problem as:

\[\left( C ^ { \prime } \right) ^ { * } = \underset { C ^ { \prime } } { \arg \max } \operatorname { acc } \left( \mathcal { N } ^ { \prime } \left( C ^ { \prime } , \mathbf { W } ^ { \prime } ; \mathcal { T } _ { \text {train} } \right) ; \mathcal { T } _ { \text {test} } \right)
\]

s.t. \(c _{ i } ^ { \prime } \in \left\{ 0.1 c _{ i } , 0.2 c _{ i } , \ldots , \alpha c _{ i } \right\}, \; i = 1, 2, \ldots, L\)

where \(\mathbf { W } ^ { \prime }\) is the weights of pruned model trained/fine-tuned on \(\mathcal { T } _ { \text {train} }\), and \(acc(·)\) denotes the accuracy on \(\mathcal { T } _ { \text {test} }\) for \(\mathcal { N } ^ { \prime }\) with structure \(\mathcal { C } ^ { \prime }\).

Within one search (i.e., one set of candidate structures), different \(C^ { \prime } = \left( c _ { 1 } ^ { \prime } , c _ { 2 } ^ { \prime } , \ldots , c _ { L } ^ { \prime } \right)\) are initialized from \(C = (c_1, c_2, …, c_L)\), then trained/fine-tuned and compared, and the structure that achieves the highest accuracy is kept.

In particular, we initialize a set of \(n\) pruned structures \(\left\{ C _ { j } ^ { \prime } \right\} _ { j = 1 } ^ { n }\) with the \(i\)-th element \(c^{\prime}_{ji}\) of \(C^{\prime}_j\) randomly sampled from \(\left\{ 0.1 c _{ i } , 0.2 c _{ i } , \ldots , \alpha c _{ i } \right\}\). Accordingly, we obtain a set of pruned models \(\left\{ \mathcal { N } _ { j } ^ { \prime } \right\} _ { j = 1 } ^ { n }\) and a set of pruned weights \(\left\{ \mathbf { W } _ { j } ^ { \prime } \right\} _ { j = 1 } ^ { n }\).

Each pruned structure \(C^{\prime}_j\) represents a potential solution to the optimization problem.

Concretely, the automatic structure search starts by randomly initializing multiple structure sets.

After obtaining these structure sets, the authors use the ABC algorithm to update them.

A short introduction to the ABC algorithm is inserted here.


ABC algorithm | [Artificial bee colony algorithm](https://en.wikipedia.org/wiki/Artificial_bee_colony_algorithm)

1、Principle

The standard ABC algorithm divides the artificial bee colony into three kinds of bees: employed bees, onlooker bees, and scout bees.

  • Employed bees forage: starting from the food-source positions in their memory, they search for other food sources in the neighborhood of those positions;
  • Employed bees share the positions they have found with the onlooker bees, which judge which food sources are better;
  • When no food source better than the threshold has been found within a limited number of trials, the food source is abandoned; the employed bee becomes a scout bee and searches randomly for a new food source.

2、Implementation

2.1 At the start, the whole colony is initialized. The colony size is 2SN: the numbers of employed bees and onlooker bees are equal (SN each), and the number of food sources also equals the number of employed bees, SN. \(X_{i}=\{x_{i,1},x_{i,2},\ldots ,x_{i,n}\}\) denotes the \(i\)-th food source (candidate solution), where \(n\) is the dimensionality.

2.2 An employed bee then generates a new position \(V_i\) from the remembered position \(X_{i}\) according to:

\[v_{i,k}=x_{i,k}+\Phi _{i,k}\times (x_{i,k}-x_{j,k})
\]

\(\Phi _{i,k}\) is a random number in \([-1, 1]\), and \(x_{j,k}\) is the \(k\)-th dimension of a different, randomly chosen solution \(j\).
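A tiny numeric sketch of this update, with assumed toy values for the two food sources:

```python
import random

x_i = [0.4, -1.2, 3.0]            # current food source X_i (assumed values)
x_j = [1.0,  0.5, -2.0]           # a different, randomly chosen food source X_j

k = random.randrange(len(x_i))    # pick one random dimension k
phi = random.uniform(-1.0, 1.0)   # Phi_{i,k} in [-1, 1]

v_i = list(x_i)
v_i[k] = x_i[k] + phi * (x_i[k] - x_j[k])   # v_{i,k} = x_{i,k} + Phi_{i,k} * (x_{i,k} - x_{j,k})
print(v_i)
```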

2.3 The two positions are then compared: based on the fitness values (a term borrowed from evolutionary algorithms) of \(X_i\) and \(V_i\), the better of the two is kept (greedy selection).

2.4 Onlooker bees then choose among the food sources with a probability proportional to their fitness values, so better sources attract more onlookers; the best source found so far is memorized across cycles. The selection probability of source \(i\) is: $$P_{i}=\frac{\mathrm{fit}_{i}}{\sum _{j}\mathrm{fit}_{j}}$$
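In code, this fitness-proportional (roulette-wheel) choice can be sketched as follows, with assumed fitness values:

```python
import random

fitness = [0.2, 0.5, 0.3]                               # assumed fitness of three food sources
probs = [f / sum(fitness) for f in fitness]             # P_i = fit_i / sum_j fit_j
chosen = random.choices(range(len(fitness)), weights=probs, k=1)[0]
print(probs, "-> onlooker visits source", chosen)
```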

2.5 If a food source has not improved within a given number of trials (the limit), it is abandoned and a new food source is initialized at random (the corresponding employed bee acts as a scout).
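The pieces 2.1-2.5 fit together as below. This is a generic, self-contained sketch of the standard ABC loop on a toy continuous problem (minimizing the sphere function), not the paper's pruning code; ABCPruner plugs its discrete structure search and accuracy-based fitness into the same employed/onlooker/scout scheme.

```python
import random

DIM, SN, LIMIT, CYCLES = 5, 10, 20, 200
LOW, HIGH = -5.0, 5.0

def objective(x):
    """Toy objective to minimize: the sphere function."""
    return sum(v * v for v in x)

def fitness(x):
    """Higher is better; the usual ABC transform of the objective."""
    return 1.0 / (1.0 + objective(x))

def random_source():
    return [random.uniform(LOW, HIGH) for _ in range(DIM)]

def neighbor(i, sources):
    """2.2: v_{i,k} = x_{i,k} + phi * (x_{i,k} - x_{j,k}) for one random dimension k."""
    j = random.choice([t for t in range(SN) if t != i])
    k = random.randrange(DIM)
    phi = random.uniform(-1.0, 1.0)
    v = list(sources[i])
    v[k] = min(HIGH, max(LOW, v[k] + phi * (v[k] - sources[j][k])))
    return v

sources = [random_source() for _ in range(SN)]   # 2.1: initialize SN food sources
trials = [0] * SN
best = min(sources, key=objective)

for _ in range(CYCLES):
    # employed bees: greedy replacement by a neighbor (2.2 + 2.3)
    for i in range(SN):
        v = neighbor(i, sources)
        if fitness(v) > fitness(sources[i]):
            sources[i], trials[i] = v, 0
        else:
            trials[i] += 1
    # onlooker bees: pick sources with probability P_i = fit_i / sum_j fit_j (2.4)
    probs = [fitness(s) for s in sources]
    for i in random.choices(range(SN), weights=probs, k=SN):
        v = neighbor(i, sources)
        if fitness(v) > fitness(sources[i]):
            sources[i], trials[i] = v, 0
        else:
            trials[i] += 1
    # scout bees: abandon sources that failed to improve for LIMIT trials (2.5)
    for i in range(SN):
        if trials[i] > LIMIT:
            sources[i], trials[i] = random_source(), 0
    best = min(sources + [best], key=objective)

print("best objective value:", objective(best))
```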

Algorithm flow

Contributions

  1. Introducing the ABC algorithm into channel pruning
