From b844e104b79ddc06161ff975aa93ffa9a7ec4801 Mon Sep 17 00:00:00 2001
From: Tim Dettmers
Date: Sun, 9 Oct 2022 19:31:43 -0700
Subject: Updated docs (#32) and changelog.

---
 README.md | 4 ++++
 1 file changed, 4 insertions(+)

(limited to 'README.md')

diff --git a/README.md b/README.md
index eac64a5..7d35a80 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,8 @@ Resources:
 - [LLM.int8() Paper](https://arxiv.org/abs/2208.07339) -- [LLM.int8() Software Blog Post](https://huggingface.co/blog/hf-bitsandbytes-integration) -- [LLM.int8() Emergent Features Blog Post](https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/)
 
 ## TL;DR
+**Requirements**
+Linux distribution (Ubuntu, MacOS, etc.) + CUDA >= 10.0. LLM.int8() requires Turing or Ampere GPUs.
 
 **Installation**:
 ``pip install bitsandbytes``
@@ -52,6 +54,8 @@ Hardware requirements:
 
 Supported CUDA versions: 10.2 - 11.7
 
+The bitsandbytes library is currently only supported on Linux distributions. Windows is not supported at the moment.
+
 The requirements can best be fulfilled by installing pytorch via anaconda. You can install PyTorch by following the ["Get Started"](https://pytorch.org/get-started/locally/) instructions on the official website.
 
 ## Using bitsandbytes
-- 
cgit v1.2.3
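
Since the requirements added in this patch (a Linux distribution, CUDA >= 10.0, and a Turing or Ampere GPU for LLM.int8()) are easy to get wrong, here is a minimal sketch of checking them from an existing PyTorch install before running ``pip install bitsandbytes``. The script name and the compute-capability cutoffs (7.5 for Turing, 8.x for Ampere) are illustrative assumptions; the patch itself only names the GPU generations.

# check_env.py -- hypothetical helper script, not part of the bitsandbytes repo.
# Sanity-checks the environment constraints named in the README diff above:
# a CUDA-enabled PyTorch build and a Turing/Ampere GPU for the LLM.int8() path.
import torch


def check_bitsandbytes_env() -> None:
    if not torch.cuda.is_available():
        raise RuntimeError(
            "CUDA is not available; bitsandbytes needs a CUDA-enabled PyTorch build."
        )
    # CUDA version this PyTorch build was compiled against (e.g. "11.7").
    print(f"PyTorch CUDA version: {torch.version.cuda}")
    major, minor = torch.cuda.get_device_capability(0)
    # Assumed mapping: Turing reports compute capability 7.5, Ampere reports 8.x.
    if (major, minor) >= (7, 5):
        print(f"Compute capability {major}.{minor}: LLM.int8() hardware requirement looks satisfied.")
    else:
        print(
            f"Compute capability {major}.{minor}: 8-bit optimizers may still work, "
            "but LLM.int8() expects a Turing (7.5) or Ampere (8.x) GPU."
        )


if __name__ == "__main__":
    check_bitsandbytes_env()

Running ``python check_env.py`` in the target environment should print the CUDA version and a capability verdict; if both look good, the installation step described in the patched README should apply as written.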