See https://github.com/eladhoffer/convNet.pytorch for an updated version of this code.
Code to replicate the results in the paper "Scalable Methods for 8-bit Training of Neural Networks".
For example, to run the 8-bit quantized ResNet-18 from the paper on ImageNet:
python main.py --model resnet_quantized --model_config "{'depth': 18}" --save quantized_resnet18 --dataset imagenet --b 128
Dependencies:
- pytorch
- torchvision to load the datasets and perform image transforms
- pandas for logging to csv
- bokeh for training visualization
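One way to install these dependencies, assuming a pip-based Python environment (the specific versions this code was developed against are not pinned here):

pip install torch torchvision pandas bokeh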
Data:
- Configure your dataset path in data.py (a hypothetical sketch of this configuration is shown below).
- To get the ILSVRC (ImageNet) data, you should register on the ImageNet site for access: http://www.image-net.org/
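A minimal sketch of the kind of path configuration data.py expects; the names used here (DATA_ROOT, DATASET_PATHS) are illustrative assumptions and may not match the actual variables in data.py:

import os

# Hypothetical example -- adjust the names and paths to match the actual data.py.
DATA_ROOT = '/path/to/datasets'
DATASET_PATHS = {
    'cifar10': os.path.join(DATA_ROOT, 'CIFAR10'),
    'imagenet': {
        'train': os.path.join(DATA_ROOT, 'ImageNet/train'),
        'val': os.path.join(DATA_ROOT, 'ImageNet/val')
    }
}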
A network model is defined by writing a .py file in the models folder and selecting it with the --model flag. The model function must be registered in models/__init__.py (see the registration sketch after the example below).
The model function must return a trainable network. It can also specify additional training options such as an optimization regime (either a dictionary or a function) and input transform modifications.
For example, a model definition:
import torch.nn as nn
import torchvision.transforms as transforms


class Model(nn.Module):

    def __init__(self, num_classes=1000):
        super(Model, self).__init__()
        self.model = nn.Sequential(...)
        # optimization regime: a list of dicts, each taking effect at its 'epoch'
        self.regime = [
            {'epoch': 0, 'optimizer': 'SGD', 'lr': 1e-2,
             'weight_decay': 5e-4, 'momentum': 0.9},
            {'epoch': 15, 'lr': 1e-3, 'weight_decay': 0}
        ]
        # input transforms applied to the data for training and evaluation
        self.input_transform = {
            'train': transforms.Compose([...]),
            'eval': transforms.Compose([...])
        }

    def forward(self, inputs):
        return self.model(inputs)


def model(**kwargs):
    # model function registered in models/__init__.py; returns a trainable network
    return Model()
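To make a model selectable with the --model flag, its model function has to be exposed from models/__init__.py. A minimal sketch of one way this registration could look, assuming the file above is saved as models/mymodel.py (the exact import style used by models/__init__.py in this repo may differ):

# models/__init__.py (hypothetical excerpt)
from .mymodel import model as mymodel

The model can then be selected on the command line, e.g. python main.py --model mymodel --dataset cifar10 (where mymodel is the hypothetical name registered above).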