path: root/bitsandbytes/autograd/_functions.py
Age         Commit message  (Author)
2022-09-17  Add a warning when this is first executed, to make people aware that a cast happens and that quantization of the operation is performed in fp16.  (justheuristic)
2022-09-11  bug fix  (dbaranchuk)
2022-09-11  refactoring  (dbaranchuk)
2022-09-11  refactoring  (dbaranchuk)
2022-09-11  clarified an exception message  (dbaranchuk)
2022-09-11  add memory efficient backward option  (dbaranchuk)
2022-09-10  Merge pull request #1 from TimDettmers/main  (Dmitry Baranchuk)
            Update main branch
2022-08-29  requires_grad for A when cast & more efficient and accurate fp16 backward  (dbaranchuk)
2022-08-26  add dtype <-> fp16 cast  (dbaranchuk)
2022-08-25  memory efficient fp16 backward  (dbaranchuk)
2022-08-24  Remove unused code  (Max Ryabinin)
2022-08-24  minor fixes  (dbaranchuk)
2022-08-23  minor fixes  (dbaranchuk)
2022-08-23  refactoring  (dbaranchuk)
2022-08-23  add memory efficient backward  (dbaranchuk)
2022-08-16  Added fused bias to matmullt.  (Tim Dettmers)
2022-08-08  Fixed prod Python < 3.7 compatibility in function.py.  (Tim Dettmers)
2022-08-08  Removed prod for Python <= 3.7 compatibility.  (Tim Dettmers)
2022-08-04  Merge branch 'debug' into cuda-bin-switch-and-cli  (Tim Dettmers)
2022-08-04  Merge branch 'extract_outliers' into debug  (Tim Dettmers)
2022-08-03  Added fixes for the case that matmullt dim A is zero, e.g. [0, 768].  (Tim Dettmers)
2022-08-01  reran black with line length 80 for greater readability  (Titus von Koeller)
2022-08-01  ran black and isort for coherent code formatting  (Titus von Koeller)
2022-07-27  Fixed direct extraction masking.  (Tim Dettmers)
2022-07-26  Matmullt with direct outlier extraction for 8-bit inference.  (Tim Dettmers)
2022-07-22  Most tests passing.  (Tim Dettmers)