Quantization for TVM
Ziheng Jiang
TVM Conference, Dec 12th 2018


Page 1:

Quantization for TVM
Ziheng Jiang

TVM Conference, Dec 12th 2018

Page 2:

What is Quantization?

source: Han et al

Converting weight values from floating point to low-bit integers (e.g., 8-bit precision) without a significant accuracy drop.
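As a toy illustration of this idea (plain Python, not TVM code; the function names are made up for this sketch), symmetric 8-bit quantization maps each float weight onto an integer in [-128, 127] via a single scale factor:

```python
# Toy illustration of symmetric 8-bit quantization (not TVM code).
def quantize_int8(weights, scale):
    # round each weight to the nearest integer step and saturate to the int8 range
    return [max(-128, min(127, round(w / scale))) for w in weights]

def dequantize_int8(q, scale):
    # map back to floats; the difference from the originals is the quantization error
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 2.4]
scale = max(abs(w) for w in weights) / 127  # map the largest magnitude onto 127
q = quantize_int8(weights, scale)
restored = dequantize_int8(q, scale)
```

Storing `q` (one byte per value) plus a single `scale` is what buys the 4x storage saving over f32.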

Page 3:

[Figure: the quantization workflow]
Frontend DL Framework → (Convert) → Relay: High-Level Graph IR → (Apply) → Quantization → Deploy

Gain compression & acceleration:
- Less storage space
- Faster arithmetic operations
- Friendly to accelerators and ultra-low-power embedded devices

Page 4:

Choice Spaces for Quantization
- number of bits: 4-bit, 8-bit, 16-bit
- quantization scheme: symmetric, asymmetric, etc.
- hardware constraints: e.g., prefer integer shift instead of float multiplication
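To make the hardware-constraint point concrete: when a rescaling factor is (or is approximated by) a power of two, the float multiply can be replaced with an integer shift. A minimal sketch, with illustrative names (not TVM code), assuming a non-negative int32 accumulator:

```python
# Rescale an int32 accumulator by 2**-shift using integer arithmetic only
# (illustrative sketch; real backends fold rounding into the lowering pass).
def rescale_shift(acc, shift):
    # add half the step before shifting, to round to nearest
    return (acc + (1 << (shift - 1))) >> shift

# the same rescale done with a float multiply
def rescale_float(acc, scale):
    return int(acc * scale + 0.5)
```

For example, `rescale_shift(1003, 4)` and `rescale_float(1003, 1/16)` compute the same result, but the shift version needs no floating-point unit, which matters on accelerators and low-power embedded devices.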

Goal
Instead of proposing "the only right way to achieve quantization in TVM", we would like to build a quantization workflow that can be customized flexibly.

Page 5:

[Figure: the same subgraph at three stages of the quantization pass]

- Original: Conv2D → Mul, Add (folded batch norm) → ReLU → Conv2D, with all activations and weights (W1, W2) in f32.
- After Annotate: a Simulated Quantize node is attached to each Conv2D input and weight and to the Mul, Add / ReLU output; the graph still computes in f32.
- After Realize: each Simulated Quantize is lowered to concrete integer operations (Shift, Clip, Cast after the first Conv2D; Mul, Clip, Cast elsewhere), so activations run in i8 and accumulation in i32.

SimQ simulates the rounding error and saturating error during quantizing. Its arguments get tuned during calibrate.

SimQ(nbit, range, sign)(x) = Clip(Round(x / range * 2^(nbit−sign))) * range / 2^(nbit−sign)
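The SimQ formula can be sketched in plain Python (illustrative names, not TVM's actual implementation; the clip bounds here assume a signed range of [-2^(nbit-1), 2^(nbit-1)-1] when sign=1, and [0, 2^nbit - 1] otherwise):

```python
# Illustrative sketch of SimQ(nbit, range, sign) -- not TVM's actual code.
def sim_quantize(x, nbit, rng, sign=1):
    # quantization step: the float range rng maps onto 2**(nbit - sign) integer levels
    scale = rng / 2 ** (nbit - sign)
    # clip bounds for the integer grid (an assumption of this sketch)
    lo = -(2 ** (nbit - sign)) if sign else 0
    hi = 2 ** (nbit - sign) - 1
    q = min(hi, max(lo, round(x / scale)))  # Round, then Clip (saturation)
    # map back to float: the output carries the rounding/saturating error
    return q * scale
```

Because the output stays in floating point, SimQ can be dropped into the f32 graph during the Annotate stage without changing any other operator, while still exposing the quantization error that calibration tunes against.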

Page 6:


# user can override the annotate function
@register_annotate_function("nn.conv2d", override=True)
def annotate_conv2d(ref_call, new_args, ctx):
    lhs, rhs = new_args
    lhs = attach_simulated_quantize(lhs, sign=False, rounding='round')
    rhs = attach_simulated_quantize(rhs, sign=False, rounding='stochastic_round')
    return expr.Call(ref_call.op, [lhs, rhs], ref_call.attrs)

# assuming we have an existing mxnet model, convert it to a relay graph
graph, params = relay.frontend.from_mxnet(mxnet_model)

# quantize the relay graph with the desired configuration
with qconfig(nbit_dict={QFieldKind.ACTIVATION: 24},
             global_scale=8.0,
             skip_k_conv=1):
    qgraph, qparams = quantize(graph, params)

# ...build and deploy it locally or remotely with tvm

Code Sample

Page 7:


End to End Performance

Accuracy drop with ResNet18 (original: 70.8%), demonstrated with 8-bit symmetric quantization:

Global Scale    Accuracy
2.0             64.1%
4.0             68.1%
8.0             69.5%
16.0            69.6%

Inference time (ms):

                Cortex A53    VTA
ResNet18        307.09        64.87
MobileNet       131.14        51.96