# Getting Started Samples for AI Tools
AI Tools gives data scientists, AI developers, and researchers familiar Python* tools and frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architectures. The components are built using oneAPI libraries for low-level compute optimizations. The toolkit maximizes performance from preprocessing through machine learning and provides interoperability for efficient model development.
You can find more information at AI Tools.
These getting started samples show you how to run samples for the different components in AI Tools.
## License
Code samples are licensed under the MIT license. See License.txt for details.
Third-party program licenses can be found here: third-party-programs.txt
| AI Tools preset | Component | Folder | Description |
|---|---|---|---|
| Classical Machine Learning | Scikit-learn (oneDAL) | Intel_Extension_For_SKLearn_GettingStarted | Speed up a scikit-learn application using Intel oneDAL. |
| Classical Machine Learning | Intel® Optimization for XGBoost* | Get Started with XGBoost | Sets up and trains an XGBoost* model on datasets for prediction. |
| Deep Learning | Intel® Extension for PyTorch* | Getting Started with Intel® Extension for PyTorch* (IPEX) | A simple training example for Intel® Extension for PyTorch*. |
| Deep Learning | Intel® Neural Compressor (INC) | Intel® Neural Compressor (INC) Sample-for-PyTorch | Performs INT8 quantization on a Hugging Face BERT model. |
| Deep Learning | Intel® Neural Compressor (INC) | Intel® Neural Compressor (INC) Sample-for-Tensorflow | Quantizes an FP32 model to INT8 using Intel® Neural Compressor (INC) and compares FP32 and INT8 performance. |
| Deep Learning | JAX* | IntelJAX GettingStarted | Demonstrates how to train a JAX model and run inference on Intel® hardware. |
| Deep Learning | oneCCL Bindings for PyTorch | Intel oneCCL Bindings For PyTorch GettingStarted | Guides users through the process of running a simple PyTorch* distributed workload on both GPU and CPU. |
| Deep Learning | ONNX Runtime* | Quickstart Examples for PyTorch, TensorFlow, and SciKit Learn | Train a model using your favorite framework, export it to ONNX format, and run inference in any supported language with ONNX Runtime. |
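To illustrate the pattern used by the scikit-learn sample in the table above: Intel® Extension for Scikit-learn accelerates stock scikit-learn by patching it at runtime so supported estimators run on oneDAL. A minimal sketch, assuming the `scikit-learn-intelex` and `scikit-learn` packages are installed (the dataset here is synthetic and only for illustration):

```python
# Minimal sketch: accelerate scikit-learn with Intel oneDAL via
# Intel(R) Extension for Scikit-learn.
# Assumes the `scikit-learn-intelex` package is installed.
from sklearnex import patch_sklearn
patch_sklearn()  # reroutes supported estimators to oneDAL implementations

# Import estimators AFTER patching so the accelerated versions are used.
from sklearn.cluster import KMeans
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 8))          # synthetic data, for illustration only
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_.shape)  # (n_clusters, n_features) -> (4, 8)
```

The same script runs unmodified on stock scikit-learn if `patch_sklearn()` is removed, which makes it easy to compare performance with and without the extension.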
*Other names and brands may be claimed as the property of others. See Trademarks for details.