author    Tim Dettmers <TimDettmers@users.noreply.github.com>    2022-09-05 16:10:47 -0700
committer GitHub <noreply@github.com>    2022-09-05 16:10:47 -0700
commit    4e4668ab09ff9fb7afc56f5faf1bdb08d32ecfb3 (patch)
tree      70264a36b85e92706af5b5d4943cb2264e40fd4d
parent    9d60b3c5279641ba936facd710c722ebe52fcf40 (diff)
parent    e4e13db8123170e14683aa454739e2bfcff4a6e0 (diff)
Merge pull request #13 from chessgecko/patch-1
fix param name
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 0ae3afa..1f94317 100644
--- a/README.md
+++ b/README.md
@@ -23,12 +23,12 @@ Resources:
1. Comment out torch.nn.Linear: ``#linear = torch.nn.Linear(...)``
2. Add bnb 8-bit linear light module: ``linear = bnb.nn.Linear8bitLt(...)`` (base arguments stay the same)
3. There are two modes:
- - Mixed 8-bit training with 16-bit main weights. Pass the argument ``use_fp16_weights=True`` (default)
- - Int8 inference. Pass the argument ``use_fp16_weights=False``
+ - Mixed 8-bit training with 16-bit main weights. Pass the argument ``has_fp16_weights=True`` (default)
+ - Int8 inference. Pass the argument ``has_fp16_weights=False``
4. To use the full LLM.int8() method, use the ``threshold=k`` argument. We recommend ``k=6.0``.
```python
# LLM.int8()
-linear = bnb.nn.Linear8bitLt(dim1, dim2, bias=True, use_fp16_weights=False, threshold=6.0)
+linear = bnb.nn.Linear8bitLt(dim1, dim2, bias=True, has_fp16_weights=False, threshold=6.0)
# inputs need to be fp16
out = linear(x.to(torch.float16))
```