This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Fix the installation doc for MKL-DNN backend #12534

Merged
merged 3 commits on Sep 17, 2018
10 changes: 5 additions & 5 deletions docs/install/build_from_source.md
@@ -40,7 +40,7 @@ MXNet supports multiple mathematical backends for computations on the CPU:
* [Apple Accelerate](https://developer.apple.com/documentation/accelerate)
* [ATLAS](http://math-atlas.sourceforge.net/)
* [MKL](https://software.intel.com/en-us/intel-mkl) (MKL, MKLML)
* [MKLDNN](https://github.com/intel/mkl-dnn)
* [MKL-DNN](https://github.com/intel/mkl-dnn)
* [OpenBLAS](http://www.openblas.net/)

Usage of these is covered in more detail in the [build configurations](#build-configurations) section.
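
As a minimal sketch (assuming an Ubuntu host; the package name is a distribution detail, not something this doc mandates), OpenBLAS can usually be installed from system packages before configuring MXNet:

```bash
# Install OpenBLAS headers and libraries so the build can pick it up as the
# CPU math backend; any of the other backends listed above can be installed instead.
sudo apt-get update
sudo apt-get install -y libopenblas-dev
```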
@@ -92,13 +92,13 @@ The following lists show this order by library and `cmake` switch.

For desktop platforms (x86_64):

1. MKLDNN (submodule) | `USE_MKLDNN`
1. MKL-DNN (submodule) | `USE_MKLDNN`
2. MKL | `USE_MKL_IF_AVAILABLE`
3. MKLML (downloaded) | `USE_MKLML`
4. Apple Accelerate | `USE_APPLE_ACCELERATE_IF_AVAILABLE` | Mac only
5. OpenBLAS | `BLAS` | Options: Atlas, Open, MKL, Apple

Note: If `USE_MKL_IF_AVAILABLE` is set to False then MKLML and MKLDNN will be disabled as well for configuration
Note: If `USE_MKL_IF_AVAILABLE` is set to False then MKLML and MKL-DNN will be disabled as well for configuration
backwards compatibility.
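
To make the interplay of these switches concrete, here is a minimal sketch of a desktop CMake configuration (flag names as listed above; the chosen values are illustrative, not the project defaults):

```bash
# Configure MXNet with the MKL-DNN submodule enabled and OpenBLAS as the BLAS
# fallback. USE_MKL_IF_AVAILABLE is left ON because turning it off would also
# disable MKLML and MKL-DNN, as noted above.
mkdir -p build && cd build
cmake -DUSE_MKLDNN=ON \
      -DUSE_MKL_IF_AVAILABLE=ON \
      -DBLAS=Open \
      ..
```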

For embedded platforms (all others, or when cross-compiling):
@@ -129,8 +129,8 @@ It has following flavors:
<!-- [Removed until #11148 is merged.] This is the most effective option since it can be downloaded and installed automatically
by the cmake script (see cmake/DownloadMKLML.cmake).-->

* MKLDNN is a separate open-source library, it can be used separately from MKL or MKLML. It is
shipped as a subrepo with MXNet source code (see 3rdparty/mkldnn or the [mkl-dnn project](https://github.com/intel/mkl-dnn))
* MKL-DNN is a separate open-source library, it can be used separately from MKL or MKLML. It is
shipped as a subrepo with MXNet source code (see 3rdparty/mkldnn or the [MKL-DNN project](https://github.com/intel/mkl-dnn))

Since the full MKL library is almost always faster than any other BLAS library, it is turned on by default;
however, it needs to be downloaded and installed manually before running the `cmake` configuration.
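
Since MKL-DNN ships as a subrepo, a checkout cloned without `--recursive` needs its submodules fetched before running `cmake`; a minimal sketch using standard git commands:

```bash
# Fetch the MKL-DNN subrepo (3rdparty/mkldnn) into an existing MXNet checkout.
git submodule update --init --recursive 3rdparty/mkldnn
```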
10 changes: 9 additions & 1 deletion docs/install/ubuntu_setup.md
@@ -70,7 +70,7 @@ pip install mxnet-cu92mkl

Alternatively, you can use the table below to select the package that suits your purpose.

| MXNet Version | Basic | CUDA | MKL | CUDA/MKL |
| MXNet Version | Basic | CUDA | MKL-DNN | CUDA/MKL-DNN |
|-|-|-|-|-|
| Latest | mxnet | mxnet-cu92 | mxnet-mkl | mxnet-cu92mkl |
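
For example, to pick the MKL-DNN column from the table above (package names exactly as listed; the plain `pip install` form is a sketch, and a virtualenv or `--user` install may be preferred):

```bash
# CPU-only build with the MKL-DNN backend.
pip install mxnet-mkl

# CUDA 9.2 build with the MKL-DNN backend.
pip install mxnet-cu92mkl
```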

@@ -167,6 +167,14 @@ If building on CPU and using OpenBLAS:
make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas
```

If building on CPU and using MKL and MKL-DNN (make sure MKL is installed according to [Math Library Selection](build_from_source.html#math-library-selection) and [MKL-DNN README](https://github.com/apache/incubator-mxnet/blob/master/MKLDNN_README.md)):

```bash
git clone --recursive https://github.com/apache/incubator-mxnet.git
cd incubator-mxnet
make -j $(nproc) USE_OPENCV=1 USE_BLAS=mkl USE_MKLDNN=1
```

If building on GPU and you want OpenCV and OpenBLAS (make sure you have installed the [CUDA dependencies first](#cuda-dependencies)):

```bash