Decoupled Networks

By Weiyang Liu*, Zhen Liu*, Zhiding Yu, Bo Dai, Rongmei Lin, Yisen Wang, James Rehg, Le Song

(* equal contribution)

License

Decoupled Networks is released under the MIT License (refer to the LICENSE file for details).

Updates

  • Examples for ImageNet-2012
  • Examples for CIFAR-100

Contents

  1. Introduction
  2. Short Video Introduction
  3. Citation
  4. Requirements
  5. Usage

Introduction

Inner product-based convolution has been a central component of convolutional neural networks (CNNs) and the key to learning visual representations. Inspired by the observation that CNN-learned features are naturally decoupled, with the feature norm corresponding to intra-class variation and the feature angle corresponding to semantic difference, we propose a generic decoupled learning framework which models the intra-class variation and the semantic difference independently.

Specifically, we first reparametrize the inner product to a decoupled form and then generalize it to the decoupled convolution operator which serves as the building block of our decoupled networks. We present several effective instances of the decoupled convolution operator. Each decoupled operator is well motivated and has an intuitive geometric interpretation. Based on these decoupled operators, we further propose to directly learn the operator from data.
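
To make the reparametrization concrete, the following is a minimal NumPy sketch of one decoupled operator instance: the TanhConv magnitude function combined with the cosine angular activation, the pairing used in the examples below. It is a sketch for intuition only, not the TensorFlow implementation in this repository, and the hyperparameter names (alpha, rho) are illustrative.

    import numpy as np

    def decoupled_response(w, x, alpha=1.0, rho=1.0, eps=1e-8):
        """Compute h(||w||, ||x||) * g(theta) for one kernel w and one input patch x.

        A standard convolution computes <w, x> = ||w|| * ||x|| * cos(theta),
        which couples magnitude and angle. The decoupled form replaces them
        with an explicit magnitude function h and angular activation g.
        """
        w = np.ravel(w).astype(np.float64)
        x = np.ravel(x).astype(np.float64)

        # Magnitude function (TanhConv): bounded response that saturates with ||x||.
        h = alpha * np.tanh(np.linalg.norm(x) / rho)

        # Angular activation (cosine): cosine of the angle between w and x.
        g = np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x) + eps)

        return h * g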

The latest version of our paper is available on arXiv. Our work is largely inspired and motivated by the observation that CNN-learned features are naturally decoupled, as shown below.

As illustrated below, the central idea of decoupled networks is the decoupled convolution, which replaces all the original convolution operators.

Short Video Introduction

The following is a short video introduction by Zhen Liu.

[Video: DCNet_talk]

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{Liu_2018_CVPR,
    author = {Liu, Weiyang and Liu, Zhen and Yu, Zhiding and Dai, Bo and Lin, Rongmei and Wang, Yisen and Rehg, James M. and Song, Le},
    title = {Decoupled Networks},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2018}
}

Requirements

  1. Python 2.7
  2. TensorFlow (Tested on version 1.01)
  3. numpy

Usage

Part 1: Clone the repository

  • Clone the repository.

    git clone https://github.com/wy1iu/DCNets.git

Part 2: CIFAR-100

  • Training DCNets with TanhConv + Cosine on CIFAR-100:

    cd $DCNET_ROOT/dcnet_cifar100/tanh_cos
    python train_resnet.py
  • To train other models, replace the model directory name (tanh_cos) in the commands above with the desired variant.

Part 3: ImageNet-2012

  • Download the ImageNet-2012 dataset and preprocess it with TensorFlow-Slim.

  • We provide one example with the modified ResNet-18 for ImageNet-2012, using the TanhConv magnitude function and the cosine angular activation. The magnitude and angular functions can be replaced with the other choices mentioned in the paper, or with any customized functions (see the sketch after this list for an illustration).

    cd $DCNET_ROOT/dcnet_imagenet
    python train_DCNet.py
  • We provide our result for this implementation, which matches the result of 88.9% reported in the paper.
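
As a rough guide to what the other choices could look like, below are a few magnitude and angular functions from the paper written as plain Python/NumPy callables. The names and signatures are illustrative only and do not correspond to this repository's API; hyperparameter conventions may differ from the code.

    import numpy as np

    # Magnitude functions h(||w||, ||x||): act on the input norm.
    def tanh_magnitude(x_norm, alpha=1.0, rho=1.0):
        return alpha * np.tanh(x_norm / rho)   # bounded (TanhConv)

    def linear_magnitude(x_norm, alpha=1.0):
        return alpha * x_norm                  # unbounded (LinearConv)

    # Angular activations g(theta): act on the angle between w and x (radians).
    def cosine_angular(theta):
        return np.cos(theta)

    def linear_angular(theta):
        return 1.0 - 2.0 * theta / np.pi       # linear in the angle

Any such pair (h, g) can in principle be substituted into the decoupled operator sketched in the Introduction.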
