ilumiere - Overview


💭

I may be slow to respond.


Pinned

  1. Forked from InternLM/lmdeploy

    LMDeploy is a toolkit for compressing, deploying, and serving LLMs.

    Python

  2. Forked from ollama/ollama

    Get up and running with Llama 3.1, Mistral, Gemma 2, and other large language models.

    Go

  3. Forked from sgl-project/sglang

    SGLang is yet another fast serving framework for large language models and vision language models.

    Python

  4. Forked from vllm-project/vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python