ServerlessLLM

| Documentation | Paper | Discord | WeChat |


ServerlessLLM (sllm, pronounced "slim") is an open-source serverless framework designed to make custom and elastic LLM deployment easy, fast, and affordable. As LLMs grow in size and complexity, deploying them on AI hardware has become increasingly costly and technically challenging, limiting custom LLM deployment to only a select few. ServerlessLLM solves these challenges with a full-stack, LLM-centric serverless system design, optimizing everything from checkpoint formats and inference runtimes to the storage layer and cluster scheduler.

News

  • [11/24] We have added experimental support for fast checkpoint loading on AMD GPUs (ROCm) when used with vLLM, PyTorch, and HuggingFace Accelerate. Please refer to the documentation for more details.
  • [10/24] ServerlessLLM was invited to present at a global AI tech vision forum in Singapore.
  • [10/24] We hosted the first ServerlessLLM developer meetup in Edinburgh, attracting over 50 attendees both offline and online. Together, we brainstormed many exciting new features to develop. If you have great ideas, we’d love for you to join us!
  • [10/24] We made the first public release of ServerlessLLM. Check out the details of the release here.
  • [09/24] ServerlessLLM now supports embedding-based RAG + LLM deployment. We’re preparing a blog and demo—stay tuned!
  • [08/24] ServerlessLLM added support for vLLM.
  • [07/24] We presented ServerlessLLM at Nvidia's headquarters.
  • [06/24] ServerlessLLM officially went public.

Goals

ServerlessLLM is designed to let multiple LLMs efficiently share limited AI hardware and switch between them on demand, increasing hardware utilization and reducing the cost of LLM services. This multi-LLM scenario, commonly referred to as Serverless, is highly sought after by AI practitioners, as seen in offerings like Serverless Inference, Inference Endpoints, and Model Endpoints. However, these existing offerings often face performance overhead and scalability challenges, which ServerlessLLM addresses through three key capabilities:

ServerlessLLM is Fast:

  • Supports leading LLM inference libraries such as vLLM and HuggingFace Transformers. Through vLLM, ServerlessLLM can run on various types of AI hardware (summarized by vLLM here).
  • Achieves 5-10X faster loading speeds compared to Safetensors and the PyTorch Checkpoint Loader.
  • Features an optimized model loading scheduler, offering 5-100X lower start-up latency than Ray Serve and KServe.

ServerlessLLM is Cost-Efficient:

  • Allows multiple LLMs to share GPUs with minimal model-switching overhead and supports seamless live migration of inference.
  • Maximizes the use of local storage on multi-GPU servers, reducing the need for expensive storage servers and excessive network bandwidth.

ServerlessLLM is Easy-to-Use:

Getting Started

  1. Install ServerlessLLM with pip or from source.
# On the head node
conda create -n sllm python=3.10 -y
conda activate sllm
pip install serverless-llm

# On a worker node
conda create -n sllm-worker python=3.10 -y
conda activate sllm-worker
pip install serverless-llm[worker]
  2. Start a local ServerlessLLM cluster using the Quick Start Guide.

  3. Want to try fast checkpoint loading in your own code? Check out the ServerlessLLM Store Guide.
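
The Store Guide is the authoritative reference; the sketch below only illustrates what fast checkpoint loading might look like in user code. It assumes the sllm_store.transformers helpers save_model and load_model with roughly these signatures, a locally running sllm-store server, and the hypothetical storage path ./models, so please verify the exact API against the Store Guide.

# Hedged sketch of ServerlessLLM Store with HuggingFace Transformers.
# Function names, arguments, and the "./models" path are assumptions;
# check the Store Guide for the authoritative API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

from sllm_store.transformers import save_model, load_model  # assumed module layout

# 1) Convert a HuggingFace checkpoint into the ServerlessLLM loading format (done once).
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype=torch.float16)
save_model(model, "./models/facebook/opt-1.3b")

# 2) Later, load it quickly onto the GPU (assumes an sllm-store server is
#    running and serving the "./models" storage path).
model = load_model(
    "facebook/opt-1.3b",
    device_map="auto",
    torch_dtype=torch.float16,
    storage_path="./models/",
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))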

Documentation

To install ServerlessLLM, please follow the steps outlined in our documentation. ServerlessLLM also offers Python APIs for loading and unloading checkpoints, as well as CLI tools to launch an LLM cluster. Both the CLI tools and APIs are demonstrated in the documentation.
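
For illustration, interacting with a running cluster through the CLI might look like the sketch below. The command names, flags, and service address follow the Quick Start Guide but may differ between releases (for example, the CLI entry point and default port can change), so treat this as a sketch rather than authoritative usage.

# Hedged sketch: deploy a model, query it through the OpenAI-compatible
# endpoint, then release it. Verify commands, flags, and the endpoint
# address against the Quick Start Guide for your release.
sllm-cli deploy --model facebook/opt-1.3b

curl http://127.0.0.1:8343/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "facebook/opt-1.3b",
        "messages": [{"role": "user", "content": "What is your name?"}]
      }'

sllm-cli delete facebook/opt-1.3b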

Benchmark

Benchmark results for ServerlessLLM can be found here.

Community

ServerlessLLM is maintained by a global team of over 10 developers, and this number is growing. If you're interested in learning more or getting involved, we invite you to join our community on Discord and WeChat. Share your ideas, ask questions, and contribute to the development of ServerlessLLM. To become a contributor, please refer to our Contributor Guide.

Citation

If you use ServerlessLLM for your research, please cite our paper:

@inproceedings{fu2024serverlessllm,
  title={ServerlessLLM: Low-Latency Serverless Inference for Large Language Models},
  author={Fu, Yao and Xue, Leyang and Huang, Yeqi and Brabete, Andrei-Octavian and Ustiugov, Dmitrii and Patel, Yuvraj and Mai, Luo},
  booktitle={18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24)},
  pages={135--153},
  year={2024}
}
