kunal-vaishnavi - Overview

Popular repositories

  1. Forked from facebookincubator/AITemplate

    AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.

    Python

  2. Forked from openai/whisper

    Robust Speech Recognition via Large-Scale Weak Supervision

    Python

  3. Forked from microsoft/onnxruntime

    ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

    C++

  4. Forked from huggingface/optimum

    šŸŽļø Accelerate training and inference of šŸ¤— Transformers with easy to use hardware optimization tools

    Python

  5. Forked from NVIDIA/TensorRT

    NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.

    C++

  6. Forked from huggingface/transformers

    🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

    Python