author     Ben Sima <ben@bsima.me>  2024-05-14 11:18:58 -0400
committer  Ben Sima <ben@bsima.me>  2024-05-20 22:15:50 -0400
commit     20985f8985d810092a84f31a705144b9318235dd
tree       f3a8cb4c71dc77f23598b6e377cb1ed81afefca4 /Biz/Llamacpp.py
parent     2d33aa547ff6a516c90ca2b47b13e2add200583a
Test that llama-cpp is buildable
This small Llamacpp.py file is simply intended to test that llama.cpp can build.
This was previously not working, I guess, because the build system doesn't
verify that the final executable has its dependencies set properly in $PATH. Not
sure if it *should* do that verification or not.
Anyway, I rewrote this to actually test if it could call `llama`, and it could
not, because the Python builder needed the rundeps in its propagatedBuildInputs.
That alone makes `llama` available to the final artifact, but the test still
failed. This is because the wrapPythonPrograms function from nixpkgs (which adds
stuff to PATH) is called in postFixup, which happens after installPhase, but
checkPhase happens before installPhase. So I was testing a program that didn't
have PATH set yet.
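For reference, the nixpkgs stdenv phase ordering (assuming doCheck and doInstallCheck are both enabled) runs roughly like this, which is why checkPhase never sees the wrapped PATH:

```
unpackPhase -> patchPhase -> configurePhase -> buildPhase
  -> checkPhase          # PATH wrapping has not happened yet
  -> installPhase
  -> fixupPhase          # postFixup calls wrapPythonPrograms here
  -> installCheckPhase   # the installed, wrapped program; PATH is set
```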
Moving the test to installCheck fixed this because it runs after the postFixup
phase. I opted to keep the lint/typecheck stuff in checkPhase because it
doesn't need any external dependencies, and having those fail earlier is
probably better? It maybe doesn't make a huge difference time-wise, but keeping
them separate makes the intention clearer: in checkPhase you are checking the
code itself, while in installCheck you are exercising the installation
environment as well.
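The builder change described above can be sketched as a Nix fragment. This is a hypothetical illustration, not the repo's actual builder; the attribute values (pname, the lint commands, the test invocation) are assumptions, while propagatedBuildInputs, doInstallCheck, checkPhase, and installCheckPhase are the real stdenv/buildPythonApplication hooks being discussed:

```nix
# Hypothetical sketch; the real Python builder in this repo differs.
buildPythonApplication {
  pname = "llamacpp-test";  # assumed name

  # Puts `llama` on the wrapped PATH of the final artifact,
  # not just in the build sandbox:
  propagatedBuildInputs = [ llama-cpp ];

  # checkPhase runs before installPhase and before postFixup's
  # wrapPythonPrograms, so only self-contained checks belong here:
  checkPhase = ''
    ruff check Biz/Llamacpp.py   # assumed lint command
    mypy Biz/Llamacpp.py         # assumed typecheck command
  '';

  # installCheckPhase runs after fixup, once PATH wrapping is done:
  doInstallCheck = true;
  installCheckPhase = ''
    $out/bin/llamacpp-test test
  '';
}
```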
Diffstat (limited to 'Biz/Llamacpp.py')
-rw-r--r--  Biz/Llamacpp.py  37
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/Biz/Llamacpp.py b/Biz/Llamacpp.py
index cd47e1e..9a2ff86 100644
--- a/Biz/Llamacpp.py
+++ b/Biz/Llamacpp.py
@@ -1,4 +1,35 @@
-"""Llamacpp."""
+"""
+Test that llama.cpp can build and exec in the omni repo.
 
-# : run nixos-23_11.llama-cpp
-# : run nixos-23_11.openblas
+Note that this does not test if llama-cpp can actually execute any models. I
+(currently) use ollama for running and managing models, but I'd like to make
+sure llama-cpp still works in case I need/want to switch at some point.
+"""
+
+# : out llamacpp-test
+# : run llama-cpp
+
+import os
+import sys
+import unittest
+
+
+class TestLlamaCpp(unittest.TestCase):
+    """Test that llama.cpp is available."""
+
+    def test_in_path(self) -> None:
+        """Test that llama.cpp is in $PATH."""
+        self.assertTrue("llama-cpp" in os.environ.get("PATH", ""))
+
+
+def main() -> None:
+    """Entrypoint."""
+    if sys.argv[1] == "test":
+        sys.argv.pop()
+        unittest.main()
+    else:
+        sys.exit(0)
+
+
+if __name__ == "__main__":
+    main()