author    Tim Dettmers <dettmers@cs.washington.edu>    2021-10-20 19:26:43 -0700
committer Tim Dettmers <dettmers@cs.washington.edu>    2021-10-20 19:26:43 -0700
commit    d06c5776e47272fea23a8a23b32733668eec8d37 (patch)
tree      0083b4df4f5f46e1cfca6d725b5052a67b90cf80
parent    a6eae2e7f2bf03f268fcb6b055201ff6827684c4 (diff)
Updated changelog.
-rw-r--r--  CHANGELOG.md  19
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a5b29d8..0d3a379 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,11 +1,11 @@
-v0.0.21
+### 0.0.21
- Ampere (RTX 30 series) GPUs are now compatible with the library.
-v0.0.22:
+### 0.0.22:
- Fixed an error where a `reset_parameters()` call on the `StableEmbedding` would lead to an error in older PyTorch versions (from 1.7.0).
-v0.0.23:
+### 0.0.23:
Bugs:
- Unified quantization API: each quantization function now returns `Q, S`, where `Q` is the quantized tensor and `S` is the quantization state, which may hold absolute max values, a quantization map, or other metadata. For dequantization, all functions now accept the inputs `Q, S`, so that `Q` is dequantized with the quantization state `S`.
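
The `Q, S` convention above can be exercised as a round trip. A minimal sketch, assuming the `quantize_blockwise`/`dequantize_blockwise` functions from `bitsandbytes.functional`; the tensor name and size are illustrative:

```python
import torch
import bitsandbytes.functional as F

# Block-wise quantization routines also support CPU tensors as of 0.0.23.
device = "cuda" if torch.cuda.is_available() else "cpu"
A = torch.randn(4096, device=device)

# Quantize: Q is the quantized tensor, S the quantization state
# (e.g. per-block absolute max values and the quantization map).
Q, S = F.quantize_blockwise(A)

# Dequantize: the same state S is passed back in alongside Q.
A_restored = F.dequantize_blockwise(Q, S)
print((A - A_restored).abs().max())  # small block-wise quantization error
```
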
@@ -18,7 +18,18 @@ Features:
- Block-wise quantization routines now support CPU Tensors.
-v0.0.24:
+### 0.0.24:
- Fixed a bug where a float/half conversion led to a compilation error for CUDA 11.1 on Turing GPUs.
- Removed the Apex dependency for bnb LAMB.
+
+### 0.0.25:
+
+Features:
+ - Added `skip_zeros` for block-wise and 32-bit optimizers. This ensures correct updates for sparse gradients and sparse models.
+ - Added support for Kepler GPUs. (#4)
+
+Bug fixes:
+ - fixed "undefined symbol: \_\_fatbinwrap_38" error for P100 GPUs on CUDA 10.1 (#5)
+
+
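
For the new `skip_zeros` option in 0.0.25, a minimal sketch of passing it to a 32-bit bnb optimizer; the changelog names the flag, but wiring it directly through the `bnb.optim.Adam` constructor is an assumption here:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(512, 512).cuda()

# skip_zeros=True: per the changelog, ensures correct updates for sparse
# gradients and sparse models (zero-valued gradient entries are skipped
# rather than decaying the optimizer state). Kwarg placement is assumed.
opt = bnb.optim.Adam(model.parameters(), lr=1e-3, skip_zeros=True)

loss = model(torch.randn(8, 512, device="cuda")).sum()
loss.backward()
opt.step()
opt.zero_grad()
```
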