|
This is the correct way to publish commits. Previously I was doing the annoying
`git branch -f live HEAD && git push` thing. This is better because it's a
single command, and the subsequent `git sync -u` will update your local `live`
branch.
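A single-command publish of that shape looks roughly like this (illustrative
only; the remote and branch names are assumptions, not necessarily the exact
invocation used here):

    # push the current HEAD directly to the remote `live` branch in one step
    git push origin HEAD:live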
|
|
With run.sh, we can build and run the file in one go. This means we can also use
it as an interpreter in a shebang line and properly use the Unix executable bit.
This is pretty cool and gives a few advantages:
- running any executable file is just `exec file.hs` or even `./file.hs`
- finding all executables is `fd -t x`
- you don't need to specify or know an `out` name to run something
- execution of a program is standardized
There is a hack to get this to work. In C and Common Lisp, `#!` is illegal
syntax, so I had to use shell syntax to invoke run.sh, call it on the current
file, and then exit the shell script. Meanwhile, run.sh takes the file and evals
the whole thing, building and running it. As long as either `//` or `;` is a
comment character in the target language, this works. Maybe a better thing
to do would be to pre-process the file and remove the `#!` before passing it to
the C compiler, like [ryanmjacobs/c][1] and [tcc][2]? However, this won't work in
Lisp because then I can't just load the file directly into the repl, so maybe
the comment hack needs to stay.
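For the C case, the general shape of the comment trick is a first line like the
following (a sketch of the idea only; the exact header these files use may
differ):

    //usr/bin/env run.sh "$0" "$@"; exit $?

A shell reading the file resolves `//usr/bin/env` like `/usr/bin/env`, runs
run.sh on the file, and exits before reaching the C code, while the C compiler
sees the whole line as a `//` comment.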
[1]: https://github.com/ryanmjacobs/c/tree/master
[2]: https://repo.or.cz/tinycc.git/blob/HEAD:/tccrun.c
|
|
I put the storybook into a new Biz.nix deploy target. The idea here is that any
Biz/* targets should be hosted by this one VM for simplicity. Over time I can
grow this as need be, but this should work to host a few services.
|
|
I need a way to reliably get a NixOS VM provisioned in the cloud, and the
easiest way to do this is to create a qcow2 image, upload it to Digital Ocean,
and use that to start a droplet. This is very much a manual process, but that's
fine; I shouldn't need to do it very often (for now).
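For example, the image can be produced with the nixos-generators tool (this is
one way to do it, not necessarily what was used here; the configuration path is
an assumption):

    # build a DigitalOcean-compatible image (a compressed qcow2) from a NixOS config
    nixos-generate --format do --configuration ./configuration.nix
    # then upload the result as a custom image in the DigitalOcean control
    # panel (or via doctl) and create the droplet from it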
|
|
I deleted the tests because they were overspecifying the functionality. My
mistake was to try to build out the objects and endpoints before the end-to-end
sync thing was fully working. And then I misunderstood how to do async with
HTMX: I was overcomplicating it, trying to create objects and endpoints for
everything instead of just focusing on the HTML that I should be generating.
This all just led to a clusterfuck of code doing all the wrong things in the
wrong places.
So far this is much better architected. And it turns out that using image n-1
with OpenAI's create_variation function doesn't work very well anyway, so I
scrapped that too; I'll have to look into different image gen services in the
future.
|
|
These were contributed in part by gptme, thanks!
|
|
This adds the Images endpoint and related functions for loading and saving
images to the filesystem.
In the view layer, it also loads the images asynchronously using HTMX, so the
images get lazy-loaded only when they are done generating.
|
|
The cabdir was getting really large and doing a `bild **/*` was actually taking
a long time just to start up. Turns out you can configure bash to ignore certain
patterns for all globs. How convenient!
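The setting in question is bash's GLOBIGNORE variable; something like this in
the shell setup (the exact pattern for the cab dir is illustrative):

    # filenames matching these colon-separated patterns are dropped from every
    # glob expansion, so `bild **/*` no longer picks up build artifacts
    export GLOBIGNORE=".git/*:_/*"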
|
|
This was all dead weight, just delete it and move on.
|
|
This is basically a full rewrite. I ripped out Flask and rearchitected the whole
thing to use fully RESTful resources and endpoints using Ludic. The UI was
completely redone to use Ludic's components. I added tests for everything that I
reasonably could.
This is almost ready for an alpha launch. Before shipping it I still need to:
1. generate images using image n-1 applied to `openai.images.create_variation()`
2. write a nix service, get it on a VM somewhere, I'll probably provision a new
VM for this
3. replace the `db` thing with a real sqlite database
I only need the first one done to show it to Lia and see if she likes it; that
should be completed in a day or two. Then the nix service and deployment won't
take long at all. Setting up a sqlite database will be annoying, but I can't
see it actually taking more than 2 days. So max 5 days out from
launching this to friends and family.
|
|
This required upgrading to python 3.12 because of some f-string format thing
that ludic uses. It's kind of annoying but the upgrade was easy enough, so I
just did it.
|
|
It's good to do this often.
|
|
This partially used gptme to create a storybook generator. The problem I ran
into is that gptme doesn't do any architecting or considerations for
maintainable code, or even readable code, so it just wrote a long script. I
couldn't test it. Also, it didn't actually generate a 10-page story, it
generated 10 separate stories. So, I ended up writing it myself and using gptme
to fix up TODOs that I wrote along the way.
|
|
I had forgotten to add this feature, apparently, so bild --test just didn't do
the test part.
|
|
This is handy for looking at llm chat history.
|
|
I forgot to add llm to this; instead I just added the extra libraries, which
meant I had the libraries present but not the binary for running them! And llm
is important in the base dev environment because I need to experiment with the
various llms independent of my application code.
|
|
I was getting confused about what is a product and what is internal
infrastructure; I think it is good to keep those things separate. So I moved a
bunch of stuff to an Omni namespace; actually, most stuff went there. Only things
that are explicitly external products are still in the Biz namespace.
|
|
Hopefully I can use this with gptme to code remotely.
|
|
Ollama releases often, and nixos-24.05 has a very old version that was
preventing me from doing work. Here I'm putting the newer nixos unstable as an
overlay into the 24.05 version so I can pull in newer versions as needed.
|
|
This domain is also expired... dang I really need to clean up my dead projects.
|
|
Yeah I let it expire... I am basically giving up on the idea but I'll keep the
code around for now until I have time to comb through it.
|
|
Trying to enable reaper and my microphone.
|
|
So emacs can load the TAGS files automatically.
|
|
I just wanted to connect to them and needed to fix these configs.
|
|
The voyager needed some special udev rules, and a program to do the flashing. I
also made the barrier port explicit by moving it to Ports.nix.
|
|
I don't want the machine to suspend after some time period; instead, I want to
manually turn off the monitors.
|
|
These were just hanging around as I was doing other things. Included in this
commit are the following:
- Configure ctags in .ctags.d to only index the languages I use
- Add support for sources listed in Sources.json
- Fix broken link to inspekt3d repo
|
|
I'm not sure when `lib.fileset` was introduced, but it allows me to easily
create a source that can be unpacked by the `unpackPhase` while maintaining
the directory structure. This means I can pass a subset of the whole repo to the
builder, and changing an unrelated file (outside of that subset) won't cause a
rebuild of bild, which wasn't the case before.
|
|
This configures `=` to use `lint -f` as the formatter when some lines are
selected. As a shortcut, `,f` can be used to format the whole buffer.
|
|
This cuts the env script short. Making tags was taking a long time and causing
git-branchless to fail. We don't need this in CI.
|
|
This makes tags for all third-party dependencies, and uses universal-ctags
instead of fast-tags. It's so nice having tags for /everything/. It's also
pretty fast, and caches the result for third-party deps anyway.
Oh yeah this also added an explicit list of third-party Python packages. I need
to filter the entire pkgset down to just the ones I'm using in order to make
tags for them.
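The invocation is roughly of this shape (a sketch; $THIRD_PARTY_SRCS is a
placeholder for however the third-party source trees get collected, and the
real script also handles caching):

    # emacs-style TAGS, recursing over the repo plus the third-party source
    # trees; the language filter lives in the .ctags.d config
    ctags -e -R -f TAGS . "$THIRD_PARTY_SRCS"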
|
|
Needed this for some LLM projects; basically, I need it whenever I want to make
embeddings because it has all of the embedding models.
|
|
I think this is the only "supported" nixos version now. But in any case scipy
seemed to be broken on the older version, and I couldn't build my code that uses
llm. Also, this allows me to get rid of the bild.os-unstable thing for
Beryllium, which was just a time bomb waiting to break things.
There are a lot of changes here because ruff updated to the very latest, and it
changed some minor lint things. Also with the new nixos I get a proper cgit
module, and some other breaking changes needed fixing.
|
|
Mynion was a prototype, and while it was cool and worked well, it is unused and
causes magma to build, which takes forever. I have settled on using ollama for
local inference and a hosted inference API for production.
|
|
Otherwise it might change from underneath me (which it did already once).
|
|
I first tried to make it into a two-level fzf menu, such that selecting a
namespace dropped you into a second fzf interface that let you select the
command you want to run. I did this because often you want to perform multiple
actions on the same namespace rather than jump between namespaces. But this
turned out to be unwieldy, so I went back to the single fzf with bindings for
actions, and I just cleaned up the code and tried to make everything easier to
read or use.
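The single-menu shape is roughly the following (a sketch; the real ns.sh
listing command, actions, and keybindings are assumptions):

    # pick a target, with keybindings that run actions on the current selection
    fd -t x | fzf \
      --bind 'ctrl-b:execute(bild {})' \
      --bind 'ctrl-t:execute(bild --test {})'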
|
|
I basically never want to not fix my code from here. Presumably if I'm invoking
lint from ns.sh, then I don't have the file open yet, or I just closed it. So
it's not super helpful to show me a bunch of things I need to fix and then tell
me that I could autofix them but don't actually fix them for me.
|
|
This shouldn't be polluting the main namespace.
|
|
It's just a simple test, but it effectively tests that the `CustomRepl` can be
instantiated for the ns and path. I also copied the unittest recipes I came up
with in the one other place I've used Python tests so far: Biz/Llamacpp.py.
Also, I'm beginning to see how a Biz/Cli.py module might work. Probably just a
simple abstract base class with move, test, help, and tidy methods, pretty
similar to the Haskell version.
|
|
I think gitlint violations were sneaking through without this.
|
|
This adds a few things I found from [this gist][1], but cleaned up quite a bit I
think, and designed a bit closer to the ghci user experience.
Along the way I figured out what ruff settings will autoformat my imports in one
alphabetized section, rather than splitting them into multiple sections for
builtins and external deps. So I made that change in the whole repo, but there
weren't too many changes.
[1]: https://gist.github.com/aliles/1153926
|
|
This is meant to emulate the `:e` command in ghci.
|
|
The regex would fail if there was text after the `import X`, so `import X as Y`
would fail, and bild wouldn't detect the local import. This adds a simple test
that guards against this regex failure.
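Illustratively (this is not bild's actual detection code, just the shape of the
regex problem):

    # too strict: anchoring at end of line misses `import X as Y`
    grep -E '^import [A-Za-z_.]+$' "$file"
    # tolerant: match the module name and ignore any trailing text
    grep -E '^import [A-Za-z_.]+' "$file"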
|
|
- ns.sh was broken because I never updated the commands to include the .sh
extensions
- changed a few other UI things in ns.sh to support my workflow
- run.sh was broken when the exe name had a hyphen in it, so just use bild
--plan instead of regex
- changed the noqa to include the English name of the ignored rule because
let's be real I'll never remember what A003 means
|
|
Pretty sure I let this expire.
|
|
Default is 3, which is too low; I keep banning myself.
|
|
At some point I guess the activation script changed to no longer include the
domain, so I have to get it from the 'domainname' systemd service.
Also I switched to sshing as $USER instead of root, which just seems like better
security practice. I originally used root because I thought you had to be root
to do nix system stuff, but it turns out that's not the case.
|
|
It's supposedly faster and actually maintained. Also, make the preview window in
ftags.sh a bit more robust.
|
|
This small Llamacpp.py file is simply intended to test that llama.cpp can build.
This was previously not working, I guess, because the build system doesn't
verify that the final executable has its dependencies set properly in $PATH. Not
sure if it *should* do that verification or not.
Anyway, I rewrote this to actually test if it could call `llama`, and it could
not, because the Python builder needed the rundeps in its propagatedBuildInputs.
That alone makes `llama` available to the final artifact, but the test still
failed. This is because the wrapPythonPrograms function from nixpkgs (which adds
stuff to PATH) is called in postFixup, which happens after installPhase, but
checkPhase happens before installPhase. So I was testing a program that didn't
have PATH set yet.
Moving the test to installCheck fixed this because it runs after the postFixup
phase. I opted to keep the lint/typecheck stuff in the checkPhase because they
don't need any external dependencies, and having those fail earlier is probably
better? Maybe it doesn't make a huge difference time-wise, but it kinda makes
the intention clearer to keep them separate: in checkPhase you are checking the
code itself; in installCheck you are including the installation environment as
well.
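For reference, the relevant ordering of the standard nixpkgs builder phases is
roughly:

    # unpackPhase -> patchPhase -> configurePhase -> buildPhase
    #   -> checkPhase          (runs before the output is installed or wrapped)
    #   -> installPhase
    #   -> fixupPhase          (postFixup hook: wrapPythonPrograms sets up PATH)
    #   -> installCheckPhase   (runs after wrapping, so `llama` is on PATH here)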
|
|
This is basically exactly the client library that I would write myself. Some
parts of it are still beta quality, but it's the sort of thing that I would
contribute to anyway.
Unfortunately I couldn't get the llm-llama-cpp plugin to work because it depends
on llama-cpp-python, which is not packaged for nix and is hard to package because
the upstream project vendors a patched version of llama.cpp. So I'm stuck with
ollama for now, but that's fine because it actually works.
|