"Fossies" - the Fresh Open Source Software Archive

Member "pytorch-1.8.2/docs/source/torch.quantization.rst" (23 Jul 2021, 1892 Bytes) of package /linux/misc/pytorch-1.8.2.tar.gz:



torch.quantization


This module implements the functions you call directly to convert your model from FP32 to quantized form. For example, torch.quantization.prepare is used in post-training quantization to prepare your model for the calibration step, and torch.quantization.convert actually converts the weights to int8 and replaces the operations with their quantized counterparts. There are other helper functions for tasks such as quantizing the input to your model and performing critical fusions like conv+relu.
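The prepare/convert workflow described above can be sketched as follows. This is a minimal illustration, not the canonical recipe: the model `M`, its layer sizes, and the random calibration data are all made up for the example, and the `fbgemm` backend assumed here is only available on x86 CPUs.

```python
import torch
import torch.nn as nn
import torch.quantization

# A hypothetical float model. QuantStub/DeQuantStub mark where tensors
# enter and leave the quantized region of the network.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc = nn.Linear(4, 4)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.fc(x)
        return self.dequant(x)

model = M().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")

# prepare() inserts observers so the calibration step can record
# activation statistics.
prepared = torch.quantization.prepare(model)

# Calibrate with representative data (random here, for illustration only).
with torch.no_grad():
    prepared(torch.randn(8, 4))

# convert() swaps in quantized modules with int8 weights.
quantized = torch.quantization.convert(prepared)
```

After conversion, `quantized.fc` is a quantized `Linear` module whose weights are stored as `torch.qint8`.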

Top-level quantization APIs

quantize

quantize_dynamic

quantize_qat

prepare

prepare_qat

convert

QConfig

QConfigDynamic
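Of the top-level APIs above, quantize_dynamic is the simplest to use because it needs no calibration step: weights are quantized ahead of time and activations are quantized on the fly at inference. A minimal sketch, with an illustrative throwaway model:

```python
import torch
import torch.nn as nn

# A hypothetical float model; layer sizes are arbitrary.
float_model = nn.Sequential(
    nn.Linear(16, 8),
    nn.ReLU(),
    nn.Linear(8, 4),
).eval()

# Quantize only the Linear layers to int8; other modules are left as-is.
dq_model = torch.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8
)

out = dq_model(torch.randn(2, 16))
```

The returned model is a drop-in replacement for the float model: the output shape and call signature are unchanged.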

Preparing model for quantization

fuse_modules

QuantStub

DeQuantStub

QuantWrapper

add_quant_dequant
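fuse_modules performs the "critical fusions" mentioned earlier, such as conv+bn+relu. A sketch under assumed module names (`conv`, `bn`, `relu` below are names in a made-up example model, passed to fuse_modules as strings):

```python
import torch
import torch.nn as nn

# A hypothetical module with a fusible conv -> bn -> relu pattern.
class ConvBNReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

m = ConvBNReLU().eval()

# In eval mode the batch norm is folded into the conv, and conv+relu
# become a single intrinsic module; the other slots become Identity.
fused = torch.quantization.fuse_modules(m, [["conv", "bn", "relu"]])
```

After fusion, `fused.conv` is an intrinsic `ConvReLU2d` module and `fused.bn` / `fused.relu` are `nn.Identity`, so the forward pass is unchanged numerically but quantizes as one unit.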

Utility functions

add_observer_

swap_module

propagate_qconfig_

default_eval_fn

Observers

ObserverBase

MinMaxObserver

MovingAverageMinMaxObserver

PerChannelMinMaxObserver

MovingAveragePerChannelMinMaxObserver

HistogramObserver

FakeQuantize

NoopObserver
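Observers record statistics about the tensors that flow through them and derive quantization parameters from those statistics. MinMaxObserver, the simplest of the list above, tracks the running min and max; a small standalone sketch:

```python
import torch
from torch.quantization import MinMaxObserver

# Track per-tensor min/max and derive affine quantization parameters
# for an unsigned 8-bit target type.
obs = MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine)

# Observers are modules: calling one records statistics for the tensor.
obs(torch.tensor([-1.0, 0.0, 2.0]))

scale, zero_point = obs.calculate_qparams()
```

`scale` and `zero_point` are exactly what convert() bakes into the quantized modules; the other observers in the list differ mainly in how the running range is maintained (moving averages, per-channel ranges, histograms).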

Debugging utilities

get_observer_dict

RecordingObserver
