This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

MKL-DNN Quantization Examples and README #12808

Merged 28 commits on Oct 19, 2018

Changes from 1 commit

Commits
- 1d7c88e add gluoncv support (xinyu-intel, Oct 8, 2018)
- 855a4dd add ssd readme (xinyu-intel, Oct 9, 2018)
- 68df9bc improve ssd readme (xinyu-intel, Oct 9, 2018)
- f9e30fe add custom readme (xinyu-intel, Oct 9, 2018)
- 8d349f5 add ssd model link (xinyu-intel, Oct 9, 2018)
- 73dc2bf add squeezenet (xinyu-intel, Oct 9, 2018)
- 82aac56 add ssd quantization script (xinyu-intel, Oct 9, 2018)
- 9942129 fix topo of args (xinyu-intel, Oct 9, 2018)
- a93bbff improve custom readme (xinyu-intel, Oct 10, 2018)
- f7f6bcb fix topo bug (xinyu-intel, Oct 11, 2018)
- 9530732 fix squeezenet (xinyu-intel, Oct 12, 2018)
- b93cb29 add squeezenet accuracy (xinyu-intel, Oct 12, 2018)
- 15545fd Add initializer for min max to support quantization (ZhennanQin, Oct 12, 2018)
- 1baaeaf add dummy data inference (xinyu-intel, Oct 12, 2018)
- a049351 rebase code (xinyu-intel, Oct 12, 2018)
- 19283ad add test case for init_param (xinyu-intel, Oct 12, 2018)
- 40195bc add subgraph docs (xinyu-intel, Oct 12, 2018)
- 9c0c2bc improve docs (xinyu-intel, Oct 14, 2018)
- 2b20043 add two models and fix default rgb_std to 1 (xinyu-intel, Oct 14, 2018)
- 1df46d4 fix doc link (xinyu-intel, Oct 14, 2018)
- a5b309d improve MKLDNN_README (xinyu-intel, Oct 14, 2018)
- 4d1338d add quantization for mobilenetv1 (xinyu-intel, Oct 15, 2018)
- c7a35dc Merge remote-tracking branch 'upstream/master' into mkldnn_quantizati… (xinyu-intel, Oct 15, 2018)
- f8fbc3f fix ssd benchmark_score label shapes (xinyu-intel, Oct 15, 2018)
- 05b88cd add resnet101_v1 and inceptionv3 support (xinyu-intel, Oct 16, 2018)
- 32b44d3 Refine some descriptions in the MKLDNN_README (juliusshufan, Oct 16, 2018)
- a80a628 improve docs (xinyu-intel, Oct 16, 2018)
- bf35236 improve link in perf.md (xinyu-intel, Oct 16, 2018)
improve ssd readme
xinyu-intel committed Oct 9, 2018
commit 68df9bce67e67a1adb942225a2aaa6103726eb6a
4 changes: 2 additions & 2 deletions example/quantization/README.md
@@ -56,7 +56,7 @@ python imagenet_inference.py --symbol-file=./model/resnet50_v1-quantized-5batche

## SSD-VGG

- Follow the [instructions](https://github.com/apache/incubator-mxnet/tree/master/example/ssd#train-the-model) in [example/ssd](https://github.com/apache/incubator-mxnet/tree/master/example/ssd) to train a FP32 `SSD-VGG16_reduced_300x300` model on the Pascal VOC dataset. You can also download our pre-trained model and packed binary data from [here](http://data.mxnet.io/data/) and extract them to the `model/` and `data/` directories.
+ Go to the [example/ssd](https://github.com/apache/incubator-mxnet/tree/master/example/ssd) directory. Follow the [instructions](https://github.com/apache/incubator-mxnet/tree/master/example/ssd#train-the-model) in [example/ssd](https://github.com/apache/incubator-mxnet/tree/master/example/ssd) to train a FP32 `SSD-VGG16_reduced_300x300` model on the Pascal VOC dataset. You can also download our pre-trained model and packed binary data from [here](http://data.mxnet.io/data/) and extract them to the `model/` and `data/` directories.

Then, use the following command for quantization. By default, this script uses 5 batches (32 samples per batch) for naive calibration:
Contributor commented:

not sure what naive calib is... maybe use the full word, calibration?
Also maybe define what this is doing?

Contributor commented:

I see in the code that there are three modes. I think these should be discussed in the documentation and not just in the code comments / help output.
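
The three modes mentioned above are `none`, `naive`, and `entropy`. Below is a minimal sketch, assuming MXNet's `mxnet.contrib.quantization.quantize_model` API, of how a calibration mode is selected; the checkpoint prefix, epoch, and record-file path are hypothetical placeholders, not names taken from this PR.

```python
import logging
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Load the FP32 model to be quantized (hypothetical prefix and epoch).
sym, arg_params, aux_params = mx.model.load_checkpoint('model/ssd_vgg16_reduced_300', 0)

# Calibration data iterator (hypothetical path; shape matches a 300x300 SSD input).
calib_data = mx.io.ImageRecordIter(path_imgrec='data/val.rec',
                                   batch_size=32,
                                   data_shape=(3, 300, 300))

# calib_mode controls how the int8 quantization thresholds are chosen:
#   'none'    - no offline calibration; min/max are computed at runtime,
#               which is the slowest option for inference
#   'naive'   - thresholds are the min/max values observed while running
#               the calibration batches through the FP32 model
#   'entropy' - thresholds minimize the KL divergence between the FP32
#               and quantized output distributions
qsym, qarg_params, qaux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.cpu(),
    calib_mode='naive',
    calib_data=calib_data,
    num_calib_examples=5 * 32,  # 5 batches of 32 samples, the script's default
    logger=logging)

# Save the calibrated, quantized model for INT8 inference.
mx.model.save_checkpoint('model/cqssd_vgg16_reduced_300', 0, qsym, qarg_params, qaux_params)
```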


@@ -75,4 +75,4 @@ python evaluate.py --cpu --num-batch 10 --batch-size 224 --deploy --prefix=./mod

# Launch INT8 Inference
python evaluate.py --cpu --num-batch 10 --batch-size 224 --deploy --prefix=./model/cqssd_
```