Personal Information
Professor
Doctoral Supervisor
Master's Supervisor
Gender: Male
Alma Mater: Harbin Institute of Technology
Degree: Ph.D.
Affiliation: School of Information and Communication Engineering
Contact: http://peihuali.org
E-mail: peihuali@dlut.edu.cn
Publications
Is Second-order Information Helpful for Large-scale Visual Recognition?
Paper Type: Conference Paper
Date of Publication: 2017-01-01
Indexed by: CPCI-S, Scopus
Volume: 2017-October
Page Range: 2089-2097
Abstract: By stacking layers of convolution and nonlinearity, convolutional networks (ConvNets) effectively learn from low-level to high-level features and discriminative representations. Since the end goal of large-scale recognition is to delineate complex boundaries of thousands of classes, adequate exploration of feature distributions is important for realizing the full potential of ConvNets. However, state-of-the-art works concentrate only on deeper or wider architecture design, while rarely exploring feature statistics higher than first-order. We take a step towards addressing this problem. Our method performs covariance pooling of high-level convolutional features, instead of the most commonly used first-order pooling. The main challenges involved are robust covariance estimation given a small sample of large-dimensional features and usage of the manifold structure of covariance matrices. To address these challenges, we present a Matrix Power Normalized Covariance (MPN-COV) method. We develop forward and backward propagation formulas regarding the nonlinear matrix functions such that MPN-COV can be trained end-to-end. In addition, we analyze both qualitatively and quantitatively its advantage over the well-known Log-Euclidean metric. On the ImageNet 2012 validation set, by combining MPN-COV we achieve over 4%, 3% and 2.5% gains for AlexNet, VGG-M and VGG-16, respectively; integration of MPN-COV into 50-layer ResNet outperforms ResNet-101 and is comparable to ResNet-152. The source code will be available on the project page: http://www.peihuali.org/MPN-COV.
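The abstract describes covariance pooling of high-level convolutional features followed by matrix power normalization. As a rough illustration only (this is not the authors' released code; the function name, the choice of power 1/2, and the eps regularizer are assumptions), a minimal NumPy sketch of the forward pooling step might look like this:

```python
# A minimal sketch of matrix-power-normalized covariance pooling,
# based only on the abstract's description. The power value (0.5) and
# the eps regularizer are assumptions, not the reference implementation.
import numpy as np

def mpn_cov(features, power=0.5, eps=1e-5):
    """Covariance pooling of convolutional features with matrix power normalization.

    features: array of shape (n, d) -- n spatial positions, d channels.
    Returns a (d, d) symmetric matrix: (sample covariance)^power.
    """
    n, d = features.shape
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / n           # sample covariance (d x d)
    cov += eps * np.eye(d)                    # regularize the small-sample estimate
    # Matrix power via eigendecomposition of the symmetric covariance.
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvals = np.clip(eigvals, 0.0, None)     # guard against tiny negative eigenvalues
    return (eigvecs * eigvals**power) @ eigvecs.T

# Example: pool a 14x14x256 feature map into a 256x256 representation.
feat_map = np.random.randn(14 * 14, 256).astype(np.float32)
pooled = mpn_cov(feat_map)    # symmetric matrix, can be vectorized for a classifier
print(pooled.shape)           # (256, 256)
```

In the paper this operation sits after the last convolutional layer and is trained end-to-end, which requires the backward formulas for the matrix power that the abstract mentions; the sketch above covers only the forward computation.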