
Bad Performance with default_observer for quantization #2849

Open
@zacurr

Description


❓ Questions and Help

Please note that this issue tracker is not a help form and this issue will be closed.

We have a set of listed resources available on the website. Our primary means of support is our discussion forum:

I tried to quantize MobileNet V2 starting from a float model file.
I found that default_observer is used for the activations in the QConfig:
https://github.com/pytorch/vision/blob/master/torchvision/models/quantization/utils.py#L27

With this configuration I got poor ImageNet classification accuracy.

The official quantization docs recommend the following configuration instead:
https://pytorch.org/docs/stable/quantization.html

qconfig = torch.quantization.get_default_qconfig('qnnpack')

This uses HistogramObserver for activations and is equivalent to:

qconfig = QConfig(activation=HistogramObserver.with_args(reduce_range=False),
                  weight=default_weight_observer)
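
For reference, here is a minimal sketch (assuming a PyTorch build where torch.quantization is available) that inspects which observers the recommended qconfig actually instantiates:

```python
import torch
from torch.quantization import get_default_qconfig

# Sketch: instantiate the observers produced by the recommended qconfig.
qconfig = get_default_qconfig('qnnpack')
print(qconfig.activation())  # expected: a HistogramObserver with reduce_range=False
print(qconfig.weight())      # expected: the default weight observer (per-tensor symmetric qint8)
```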

When I changed the default observer to HistogramObserver, I got much better accuracy.

I think the configuration at the following location should be changed to match the recommendation in the official docs:
https://github.com/pytorch/vision/blob/master/torchvision/models/quantization/utils.py#L27
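
In the meantime, a workaround is to override the qconfig on the torchvision model yourself before calibration. Below is a minimal sketch, assuming the quantizable torchvision MobileNet V2 and eager-mode post-training static quantization; the random calibration tensors are only placeholders for real ImageNet samples, and the float checkpoint loading is left to the reader:

```python
import torch
from torch.quantization import get_default_qconfig, prepare, convert
from torchvision.models.quantization import mobilenet_v2

# Build the quantizable MobileNet V2 as a float model (load your own float
# checkpoint here instead of keeping the random initialization).
model = mobilenet_v2(pretrained=False, quantize=False)
model.eval()

# Use the HistogramObserver-based qconfig recommended in the docs instead of
# the MinMaxObserver-based default used in the linked torchvision utility.
torch.backends.quantized.engine = 'qnnpack'
model.qconfig = get_default_qconfig('qnnpack')

model.fuse_model()            # fuse Conv+BN+ReLU modules before quantization
prepare(model, inplace=True)  # insert observers

# Calibration: run representative data through the model. Random tensors are
# used here only to keep the sketch self-contained; use ImageNet samples.
with torch.no_grad():
    for _ in range(10):
        model(torch.randn(1, 3, 224, 224))

convert(model, inplace=True)  # swap float modules for quantized ones
```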
