author     Tim Dettmers <tim.dettmers@gmail.com>  2022-10-09 19:31:43 -0700
committer  Tim Dettmers <tim.dettmers@gmail.com>  2022-10-09 19:31:43 -0700
commit     b844e104b79ddc06161ff975aa93ffa9a7ec4801 (patch)
tree       1a74bf0112c0cf91694b40f55b30f9177c22a28f
parent     62b6a9399de913cd83a45bb52b6bb3444e59a23b (diff)
Updated docs (#32) and changelog.
-rw-r--r--  CHANGELOG.md  13
-rw-r--r--  README.md      4
-rw-r--r--  setup.py       2
3 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a26a0e7..84333e8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -117,3 +117,16 @@ Features:
Bug fixes:
- fixed an issue where too many threads were created in blockwise quantization on the CPU for large tensors
+
+
+### 0.35.0
+
+#### CUDA 11.8 support and bug fixes
+
+Features:
+ - Added CUDA 11.8 support; CUDA 11.8 binaries are now included in the PyPI release.
+
+Bug fixes:
+ - fixed a bug where overly long directory names would crash the CUDA SETUP #35 (thank you @tomaarsen)
+ - fixed a bug where CPU installations on Colab would fail with an error #34 (thank you @tomaarsen)
+ - fixed an issue where the default CUDA version used by fast-DreamBooth was not supported #52
diff --git a/README.md b/README.md
index eac64a5..7d35a80 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,8 @@ Resources:
- [LLM.int8() Paper](https://arxiv.org/abs/2208.07339) -- [LLM.int8() Software Blog Post](https://huggingface.co/blog/hf-bitsandbytes-integration) -- [LLM.int8() Emergent Features Blog Post](https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/)
## TL;DR
+**Requirements**:
+Linux distribution (e.g., Ubuntu) + CUDA >= 10.0. LLM.int8() requires a Turing or Ampere GPU.
**Installation**:
``pip install bitsandbytes``
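
Before installing, the requirements above can be sanity-checked from Python. This is a minimal sketch using only standard PyTorch calls (`torch.version.cuda`, `torch.cuda.get_device_capability`); it is not part of the bitsandbytes package itself:

```python
# Minimal check of the stated requirements using standard PyTorch APIs.
import torch

# CUDA toolkit version PyTorch was built against (should be >= 10.0).
print("CUDA version:", torch.version.cuda)

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    # Turing = compute capability 7.5, Ampere = 8.x; LLM.int8() needs one of these.
    supported = (major, minor) >= (7, 5)
    print(f"Compute capability: {major}.{minor} -> LLM.int8() supported: {supported}")
else:
    print("No CUDA-capable GPU detected.")
```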
@@ -52,6 +54,8 @@ Hardware requirements:
Supported CUDA versions: 10.2 - 11.7
+The bitsandbytes library is currently supported only on Linux distributions; Windows is not supported at the moment.
+
The requirements can best be met by installing PyTorch via Anaconda. You can install PyTorch by following the ["Get Started"](https://pytorch.org/get-started/locally/) instructions on the official website.
## Using bitsandbytes
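
For the usage the heading above introduces, a minimal sketch follows, assuming the 8-bit Adam optimizer is exposed as `bnb.optim.Adam8bit` (as documented in the project README); the model and hyperparameters are illustrative placeholders:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()

# adam = torch.optim.Adam(model.parameters(), lr=1e-3)   # 32-bit baseline
adam = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)    # optimizer states kept in 8 bits

out = model(torch.randn(16, 1024, device="cuda"))
loss = out.pow(2).mean()
loss.backward()
adam.step()
adam.zero_grad()
```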
diff --git a/setup.py b/setup.py
index 610684b..3f5dafd 100644
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@ def read(fname):
setup(
name=f"bitsandbytes",
- version=f"0.34.0",
+ version=f"0.35.0",
author="Tim Dettmers",
author_email="dettmers@cs.washington.edu",
description="8-bit optimizers and matrix multiplication routines.",