# Fast Ops for the DiffKt Tensor library

## Dev setup

### Local prereqs
- Install onednn (formerly mkl-dnn): `brew install onednn`
- Install OpenMP: `brew install libomp`
- Install Eigen (optional): `brew install eigen`
- Install MKL (optional) on mac:
  - Download the Intel oneAPI Base Toolkit: https://software.intel.com/content/www/us/en/develop/tools/oneapi/all-toolkits.html#base-kit
  - Install the Intel oneAPI Base Toolkit: select Custom Installation, un-select everything but Intel oneAPI Math Kernel Library, then consent, install, finish.
  - Download the Intel oneAPI HPC Toolkit: https://software.intel.com/content/www/us/en/develop/tools/oneapi/all-toolkits.html#base-kit
  - Install the Intel oneAPI HPC Toolkit: select Custom Installation, un-select Intel Fortran Compiler Classic, then continue, skip.
  - Run `source /opt/intel/oneapi/setvars.sh`. Note that `setvars.sh` only sets up the environment variables for the terminal it runs in, so you need to run it every time before using MKL in a new terminal.
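Since `setvars.sh` must be re-run in every new terminal, it can help to guard the call so it is a no-op when the environment is already configured. This is only a sketch: it assumes the default install prefix `/opt/intel/oneapi`, and relies on `SETVARS_COMPLETED`, which Intel's `setvars.sh` sets after a successful run.

```shell
# Sketch: load the oneAPI environment only if it isn't loaded already.
# Assumes the default install prefix /opt/intel/oneapi; adjust if yours differs.
load_mkl_env() {
  if [ -n "${SETVARS_COMPLETED:-}" ]; then
    # setvars.sh sets SETVARS_COMPLETED, so a second source is unnecessary.
    echo "oneAPI env already loaded"
  elif [ -f /opt/intel/oneapi/setvars.sh ]; then
    # shellcheck disable=SC1091
    source /opt/intel/oneapi/setvars.sh
  else
    echo "oneAPI not found at /opt/intel/oneapi"
  fi
}
```

You could source a snippet like this from your shell profile instead of remembering to run `setvars.sh` by hand.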
### Devserver prereqs
Install DNNL:

```shell
# from <repo root>/cpp/ops
wget $(fwdproxy-config wget) https://github.com/oneapi-src/oneDNN/releases/download/v2.1/dnnl_lnx_2.1.0_cpu_gomp.tgz
tar -zxvf dnnl_lnx_2.1.0_cpu_gomp.tgz
mv dnnl_lnx_2.1.0_cpu_gomp dnnl
```

Or you can grab a more recent release from the oneDNN releases page on GitHub.
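After extracting, a quick sanity check on the resulting tree can catch a botched download or a wrong `mv`. This is a sketch; it assumes the release tarball's usual layout (`include/dnnl.h` plus a `lib/` directory), and `check_dnnl` is just an illustrative helper, not part of the repo.

```shell
# Sketch: verify the extracted DNNL tree has the expected layout.
check_dnnl() {
  local root="${1:-dnnl}"
  if [ -f "$root/include/dnnl.h" ] && [ -d "$root/lib" ]; then
    echo "dnnl looks ok"
  else
    echo "dnnl incomplete under $root"
  fi
}
```

Run it as `check_dnnl` from `<repo root>/cpp/ops` after the `mv` step above.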
## Setup
To build:

```shell
# One-time, or if you update the cmake version or local prereq version
rm -rf build
mkdir build && cd build
cmake ..

# Every time
make
```

This should produce `libsparseops_jni` and `libops_jni` in `{repo_root}/kotlin/api/src/main/resources`.
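A quick way to confirm the build landed where the Kotlin side expects is to look for both libraries in that resources directory. This is a sketch: `check_jni_libs` is an illustrative helper, and the file extension (`.dylib` on macOS, `.so` on Linux) is matched with a glob rather than assumed.

```shell
# Sketch: check that both JNI libraries were produced.
# Default path matches the build output location noted above.
check_jni_libs() {
  local dir="${1:-kotlin/api/src/main/resources}"
  local lib
  for lib in libsparseops_jni libops_jni; do
    # Glob over the extension so this works for both .so and .dylib.
    if ls "$dir/$lib".* >/dev/null 2>&1; then
      echo "found $lib"
    else
      echo "missing $lib"
    fi
  done
}
```

Run `check_jni_libs` from `{repo_root}` after `make` finishes.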
To auto-format code:

```shell
# From <repo root>/cpp/ops:
./scripts/format.sh
```
## Upgrading DNNL
Update onednn: `brew upgrade onednn`

To download a specific version of onednn (formerly mkl-dnn):

- `brew search mkl-dnn` will show PR links for older brew formulas to download mkl-dnn. If no PRs show up, you'll have to manually navigate to the correct GitHub commit.
- Navigate to the desired PR in a browser and open the raw `mkl-dnn.rb` file.
- Download the raw `mkl-dnn.rb` file: `echo "$(curl -fsSL <url to mkl-dnn.rb>)" > mkl-dnn.rb`. For example, to download version 1.4: `echo -e "$(curl -fsSL https://raw.githubusercontent.com/chenrui333/homebrew-core/93fd9e0e95cbfd065ef088bf5500129d606a1b38/Formula/mkl-dnn.rb)" > mkl-dnn.rb`
- `brew install --build-from-source mkl-dnn.rb`

For newer versions (1.6.2+), replace mkl-dnn with onednn in the instructions above.
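The formula-name cutoff above (mkl-dnn before 1.6.2, onednn from 1.6.2 on) can be encoded in a small bash helper so you don't have to remember it. `formula_for` is purely illustrative, not part of the repo or of Homebrew.

```shell
# Sketch: pick the Homebrew formula name for a given oneDNN version.
# mkl-dnn before 1.6.2, onednn from 1.6.2 onwards.
formula_for() {
  local version="$1" major minor patch
  IFS=. read -r major minor patch <<< "$version"
  if [ "$major" -gt 1 ] \
     || { [ "$major" -eq 1 ] && [ "$minor" -gt 6 ]; } \
     || { [ "$major" -eq 1 ] && [ "$minor" -eq 6 ] && [ "${patch:-0}" -ge 2 ]; }; then
    echo onednn
  else
    echo mkl-dnn
  fi
}
```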
## Select the sparse computation implementation

The three options are:

- MKL: a high-performance parallel library.
- Eigen: an easy-to-use, open-source, but not fully parallelized BLAS library.
- OMP: parallel implementations written with OpenMP, with no third-party library dependency.

See the Local prereqs section for installation instructions for Eigen and MKL.
Select with cmake: the cmake variable `SPARSE_LIB` defines which implementation to use.

- `cmake -DSPARSE_LIB=MKL ..` chooses MKL.
- `cmake -DSPARSE_LIB=EIGEN ..` chooses Eigen.
- `cmake -DSPARSE_LIB=OMP ..` chooses OMP.
- `cmake ..` leaves `SPARSE_LIB=NONE`, i.e. the user gives no preference on which implementation to use. A predefined preference order is then applied, currently `MKL > Eigen > OMP`: if a preferred third-party library is not installed, the next choice is checked. For example, if MKL is not installed, Eigen is chosen; if Eigen is also not installed, OMP is chosen.
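The fallback order for the `SPARSE_LIB=NONE` case can be sketched as a small shell function. This is illustrative only: the availability flags stand in for the checks the cmake configuration performs (the real detection logic lives in the CMake files, not here), and OMP needs no availability flag because the OpenMP path has no third-party dependency.

```shell
# Sketch of the default preference MKL > Eigen > OMP when SPARSE_LIB is unset.
# Arguments are yes/no flags standing in for "is this library installed?".
pick_sparse_lib() {
  local have_mkl="$1" have_eigen="$2"
  if [ "$have_mkl" = yes ]; then
    echo MKL
  elif [ "$have_eigen" = yes ]; then
    echo EIGEN
  else
    # OpenMP fallback: always available, no third-party library required.
    echo OMP
  fi
}
```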