The AI Horde Worker Moves to a Completely New Inference Backend

Close to a month and a half ago, our last remaining maintainer for the nataili library dropped out, and we were left functional but “rudderless” as far as inference goes. We could continue operations, but we couldn’t onboard new features anymore, as neither I nor any of the remaining regulars have ML knowledge.

In desperation, I asked one of our regulars, Jug, who had been helping out with some python work on the worker, whether he thought it would be possible to switch to the ComfyUI software as a backend, as it had some good ideas and was modular enough to be of use to us.

To my surprise, Jug not only thought it was a good idea, but jumped into the deep end with both feet and started hacking away to make it work. Not only that, but we managed to pull in another regular developer, Tazlin, who also started helping us with design best practices. As a result, the new library we started developing was built from the ground up to have extensive test coverage, which will make discovering regression bugs that much easier.

The first step was to achieve feature parity, and that required not only wrangling the ComfyUI pipelines so they could be called from hordelib, but also porting features which we were using in the AI Horde Worker, such as CLIP, over to ComfyUI.

This early phase was where I could still provide some help, as I’m pretty good at porting features, writing tests for them, and then integrating things into the AI Horde Worker. Still, the lion’s share of the work on hordelib was being done by Jug, with Tazlin making the code much more reliable and maintainable.

A couple of weeks in, we had almost all the features we needed, but this is where the tricky business started. The first thing we noticed was that ComfyUI did not handle multi-threading well, which makes sense, as it’s meant to be used by a single user on a single PC. That added massive amounts of instability, because the AI Horde Worker uses threads for everything in order to nullify latency delays.

So the next phase, which took about two more weeks, was stabilizing the thing, which required a much deeper dig into the ComfyUI internals to wrangle its individual processes into a multi-threaded paradigm.
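
Conceptually, the stabilization boils down to serializing every call that touches ComfyUI’s single-user state behind a mutex. A minimal sketch of the idea (the names here are hypothetical, not the actual hordelib code):

import threading

# ComfyUI assumes a single caller, so anything touching its shared state
# (model cache, execution graph) must be serialized across worker threads.
_comfy_lock = threading.Lock()

def run_pipeline_threadsafe(pipeline, payload):
    # Hypothetical wrapper: only one thread drives the backend at a time,
    # while the other worker threads keep fetching and submitting jobs.
    with _comfy_lock:
        return pipeline.run(payload)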

Finally that was done, about a month after I first asked about moving to Comfy. Then we discovered the next problem: due to all the mutex locks added to prevent multi-threaded instability, the whole thing was now much slower than nataili. Significantly so!

So another two weeks were spent figuring out where the slowdowns occurred in our implementation and tweaking things to work more optimally, and even trying to figure out whether there was indeed a slowdown in the first place, as comparisons with nataili were difficult to achieve.

We even built a whole benchmark suite to measure overall inference speeds without getting confused by HTTP and model-loading latency.
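
Conceptually, such a benchmark boils down to timing only the inference call after a warm-up pass. A sketch of the idea (function names hypothetical):

import time

def benchmark(generate, payload, warmup=1, runs=5):
    # Warm-up runs absorb model loading and cache filling, so they are not timed.
    for _ in range(warmup):
        generate(payload)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        generate(payload)
        timings.append(time.perf_counter() - start)
    # Average pure-inference seconds per generation.
    return sum(timings) / len(timings)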

But beta testers were still reporting a seemingly lower kudos reward, so we began to suspect that the old way of calculating kudos did not apply well to hordelib inference, due to it working differently. For example, it has no slowdown for weights, but controlnet types gave different speeds than we expected, and even different speeds per control type.

To track this down, Jug trained a new neural network to predict how much time a generation is expected to take, rather than trying to time each individual feature. The new model was so successful, reaching 96% accuracy, that we decided to onboard it onto the AI Horde itself, as a way to calculate kudos more accurately.
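
For a rough idea of the approach (a minimal sketch only; the real model’s architecture, features, and training data are Jug’s, and everything below, from the feature choice to the library, is an illustrative assumption):

from sklearn.neural_network import MLPRegressor
import numpy as np

# Hypothetical feature rows: [width, height, steps, cfg_scale, uses_controlnet]
X = np.array([
    [512, 512, 30, 7.5, 0],
    [768, 768, 50, 7.5, 1],
    # ...in practice, thousands of timed generations gathered from real workers
])
y = np.array([4.2, 11.8])  # measured seconds for each generation

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
model.fit(X, y)

# The predicted duration can then feed directly into the kudos calculation.
print(model.predict([[640, 640, 40, 7.5, 0]]))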

This investigation did point us to some things that worked unexpectedly within ComfyUI. For example, prompts longer than 77 tokens tended to be quite a bit slower which, after speaking with the ComfyUI devs, turned out to be a quality tradeoff. We did discover a workaround for the AI Horde, but it’s these sorts of things that introduce unexpected slowdowns compared to before. We’re going to continue looking for and tweaking such things as we discover them.

The good news is that the overall quality of images using the ComfyUI branch has increased across the board. Not only do weights no longer add extra slowdown (so their extra kudos cost is removed), but they can also exceed 1.3 without causing the image to distort, which is how most other UIs use them anyway.

The big change is that images generated with the same payload and the same seed will look different in ComfyUI compared to nataili. This is simply due to the way inference works, and something we’ll have to live with.

1.0.0

So now we have the three pillars built: Parity, Stability, and Speed. It’s time to go live!

The hordelib has been bumped to 1.0.0 and the AI Horde Worker to 21.0.0. The next time you run update-runtime, you’ll automatically be switched to the new inference backend, but you may need to update your bridgeData.yaml file ahead of time.

Very shortly, the changes you need to make are:

  1. Set vram_to_leave_free and ram_to_leave_free to values that work for you.
  2. Rename nataili_cache_home to cache_home.
  3. Delete any unused keys (like disable_voodoo).
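
For illustration, the relevant part of an updated bridgeData.yaml might look something like this (the values are placeholders to adapt, not recommendations):

# bridgeData.yaml (excerpt, illustrative values only)
vram_to_leave_free: "80%"
ram_to_leave_free: "80%"
cache_home: "./models"  # previously nataili_cache_home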

Also, as a user of the AI Horde, keep in mind that the new workers do not yet support tiling and pix2pix.

But the new inference backend is not only available to the AI Horde; it’s available to everyone else as well. Due to the generic way we’ve built it, any Python project which needs access to image generation can now install hordelib from PyPI and get access to all the multi-threaded text2img and img2img functionality we provide!
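
As a sketch of what that might look like (the entry points shown here, hordelib.initialise() and HordeLib.basic_inference(), follow the library’s early examples and may have changed; treat them as assumptions):

import hordelib
hordelib.initialise()  # must be called before importing the rest of the library

from hordelib.horde import HordeLib

generate = HordeLib()
# The payload mirrors the AI Horde image generation payload.
image = generate.basic_inference({
    "sampler_name": "k_euler",
    "cfg_scale": 7.5,
    "seed": "1234",
    "width": 512,
    "height": 512,
    "steps": 30,
    "prompt": "a horde of cute robots painting a mural",
    "model": "stable_diffusion",
})
image.save("result.png")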

What’s next

With the move to hordelib, we are now effectively outsourcing our inference development upstream, which allows us to make use of new developments in Stable Diffusion as they get onboarded into ComfyUI. Hopefully development of ComfyUI will continue for the foreseeable future, as I am really not looking forward to changing libraries again any time soon >_<

This also means that we now finally have the capability to onboard LoRAs and Textual Inversion, which have been requested for a long time but were never possible with our old backend. Likewise with new Stable Diffusion models and all the exciting new developments happening practically weekly.

It’s been a lot of hard work, but we’re coming out of it stronger than ever, thanks to the invaluable help of Jug, Tazlin and the rest of the AI Horde community!

Merging of the Hordes. The AI Horde is live!

A while back (gosh, it occurs to me this project is half a year old by now!) I took significant steps to join the two forks I had made of the AI Horde (one for Stable Diffusion and one for KoboldAI), as their diverging code was too difficult to maintain and keep at parity with the features and bug fixes I kept adding.

Then later on, I realized that my code just could not scale anymore, so I undertook a massive refactoring of the code-base to switch to an ORM approach. Due to the time-criticality of that refactor (at the time, the Stable Horde was practically unusable due to the sheer load), I focused on getting the Stable Horde API up and running and disregarded the KoboldAI API, as that was running stably on a different machine and didn’t have nearly as much traffic to be affected.

Once that was deployed, a number of other fires had to constantly be put out, and new features had to be onboarded, as Stable Diffusion is growing by leaps and bounds. That meant I never really had time to move the KoboldAI side to the ORM as well, especially since the code required refactoring to allow two types of workers to exist.

Later on, I added image interrogation capabilities as well, which incidentally required that I set up the horde to handle multiple types of workers. This led me to figuring out how to do ORM class inheritance (which required figuring out polymorphic tables and other fun stuff), but it also meant that a big part of the groundwork was laid for adding the text workers (which is the kind of thing that does wonders to get my ADHD brain over its executive dysfunction).

Since then, it’s been constantly on the back of my mind that I needed to finally do the last part and merge the two hordes into a single code base. I had kept the KAI horde in a single lonely branch called KAI_DO_NOT_DELETE (because I deleted the other branch once during branch cleanup :D) and the single-core horde node running. But requests for improvements and bug fixes on the KAI horde kept coming, and the code bases had diverged so much by now that it was quite a mess to even remember how to update things properly.

The final straw was when I noticed the traffic to the KAI Horde had also increased significantly, probably due to the ease of using it through KoboldAI Lite. It was getting closer and closer to the point where the old code base would collapse under its own weight.

So it was time. I blocked off my weekend and started the fourth large refactoring of the AI Horde base: the one which would allow me to run the two horde types, which were mutually exclusive in the past, at the same time.

This one meant a whole new endpoint, new table polymorphism and going through all my database functions to ensure that all the data is fetched from all types of polymorphic classes.
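
To illustrate the concept with a generic SQLAlchemy sketch (not the Horde’s actual schema): with joined-table inheritance, a single query against the base class transparently fetches every worker subtype.

from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Worker(Base):
    __tablename__ = "workers"
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    worker_type = Column(String(30))
    __mapper_args__ = {"polymorphic_on": worker_type, "polymorphic_identity": "worker"}

class ImageWorker(Worker):
    __tablename__ = "image_workers"
    id = Column(Integer, ForeignKey("workers.id"), primary_key=True)
    __mapper_args__ = {"polymorphic_identity": "image"}

class TextWorker(Worker):
    __tablename__ = "text_workers"
    id = Column(Integer, ForeignKey("workers.id"), primary_key=True)
    __mapper_args__ = {"polymorphic_identity": "text"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add_all([ImageWorker(name="img-1"), TextWorker(name="txt-1")])
    session.commit()
    # One query against the base class fetches all polymorphic subtypes.
    print([type(w).__name__ for w in session.query(Worker).all()])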

I also wanted to make my endpoints flexible, so it occurred to me it would be better to have, say, api/v2/workers?type=text instead of maintaining api/v2/workers/image and api/v2/workers/text independently. This in turn ran into caching issues, as my cache did not take the query string into account when storing results (and I am still not sure how to make it do so), so I had to turn to the redis cache.
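
For what it’s worth, some caching layers can be told to include the query string in the cache key. Here is a sketch with Flask-Caching (an assumption; I don’t know the Horde’s actual caching stack, and get_workers() is a hypothetical lookup):

from flask import Flask, request
from flask_caching import Cache

app = Flask(__name__)
cache = Cache(app, config={"CACHE_TYPE": "SimpleCache"})

@app.route("/api/v2/workers")
@cache.cached(timeout=10, query_string=True)  # cache key includes ?type=...
def list_workers():
    worker_type = request.args.get("type")  # "text", "image", or None for all
    return get_workers(worker_type)  # hypothetical database lookup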

That in turn caused the bandwidth to my redis cache to skyrocket, so I then needed to implement a local redis cache on each node server as well, which required reworking my code to handle two caches at the same time. It was a cascading effect of refactoring 😀
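
Conceptually, the two-tier lookup works something like this (a minimal sketch with the redis-py client; the hostnames and TTL are placeholders):

import redis

local = redis.Redis(host="localhost")           # node-local cache
shared = redis.Redis(host="redis.example.com")  # central cache

def cached_get(key):
    # Hit the node-local redis first to keep bandwidth off the central one.
    value = local.get(key)
    if value is None:
        value = shared.get(key)
        if value is not None:
            # Short TTL: nodes stay mostly fresh without central invalidation.
            local.setex(key, 10, value)
    return value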

Fortunately I managed to get it all working, and I also updated the code for the KoboldAI Client and its bridge to use the new and improved version 2 of the API. Just yesterday, those changes were merged.

That in turn brought me to the next question: now that the hordes were running together, it was no longer accurate to call the service the “Stable Horde” or the “KoboldAI Horde”. I had already foreseen this a while ago and had renamed my main repo to AI Horde. But I now found the need to also serve all sorts of generative AI content from the main server, so I made the decision to deploy a new domain name. And the AI Horde was born!

I haven’t flipped all the switches needed yet, so at the moment the old https://stablehorde.net is still working, but the eventual plan is to make it a simple redirect to https://aihorde.net instead.

The KAI community is happy, and I’m no longer afraid that they’re going to crash and burn from a random DB corruption; they can now scale along with the rest of the Horde.

Now onward to more features!

Image Interrogations are now available on the Stable Horde!

The Nataili ML backend powering the workers of the Stable Horde has for a while now supported models which can perform image interrogation (AKA img2text) operations, such as captioning images or verifying whether they display NSFW content. For almost as long, I’ve wanted the AI Horde to facilitate the widespread use of those models, the same way we do for Stable Diffusion.

A primary reason for wanting this is that the requirements to run a worker on the horde are fairly heavy, needing at least a mid-range GPU in your PC, and most people just don’t have the capacity to provide that. Yes, there is always the option of running generations on free cloud services like Google Colaboratory, but that replaces cost with time and attention.

So I felt that being able to use models which are fairly low-powered, and can run even on CPUs, would provide a way for almost everyone to join the horde and start gaining kudos for themselves. The final push I needed was discovering a useful accessibility browser extension which had already ceased operations because they couldn’t find cheap compute; which is effectively what the horde has been built to provide!

I was planning to get this done two weeks ago, but unfortunately I got massively sick during the holidays and couldn’t do much of anything. So I moved my vacation days to the new year and finally got cracking.

Unfortunately, while the implementation of these models is much simpler than Stable Diffusion, preparing the AI Horde to serve them was not nearly as straightforward. The problem is that until now I had built the horde under two core assumptions:

  1. The input is going to include a prompt of some sort on which to run inference.
  2. The prompt would always expect the same type of result, whether that is an image or text.

Image interrogations flip both of these on their head. The input is a simple image, with no prompt from the user (other than payload tweaks), and the end results can differ wildly from each other: one form returns text, another a boolean, and yet another a dictionary.

So I needed to handle a flow that I hadn’t engineered for until now, which required building a new pipeline inside the AI Horde from scratch.

To make things worse, I did not want to duplicate my worker code, which required me to implement table polymorphism within SQLAlchemy, a tricky subject on its own. More importantly, it required modifying existing tables, which meant I needed to set up a development instance of the Stable Horde so that I could actually test the changes before going live. That in turn meant a new server, new DB, new nodes, etc. Happily I had most of it ready via my Ansible code, but I still needed to tweak things to run on a new domain and so on.

Finally, this also required that I implement polymorphism on the bridged worker as well. The existing worker code has evolved to use quite advanced mechanisms for queuing, threading, etc., and I didn’t want to just duplicate it. Unfortunately the code itself had become very spaghetti, and it was high time I de-indented it with extreme prejudice and then implemented worker polymorphism as well.
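
The shape of that polymorphism, conceptually (a generic sketch, not the actual worker code): the shared queuing and threading machinery lives in a base class, and each worker type overrides only what differs.

class HordeWorker:
    """Base class: shared queuing, threading and retry logic would live here."""

    def pop_job(self):
        raise NotImplementedError

    def process(self, job):
        raise NotImplementedError

class StableDiffusionWorker(HordeWorker):
    def pop_job(self):
        return pop_generation_request()   # hypothetical horde API call

    def process(self, job):
        return generate_image(job)        # hypothetical inference call

class InterrogationWorker(HordeWorker):
    def pop_job(self):
        return pop_interrogation_forms()  # hypothetical horde API call

    def process(self, job):
        return interrogate_image(job)     # hypothetical inference call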

All in all, designing, building, and testing image interrogations took me the better part of a whole week.

So I am proud to announce that the new feature is now live on the stable horde!

Response Body from an image interrogation request
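
For illustration, such a response body might look something like this (the field names approximate the live API and the values are invented; check the API documentation for the authoritative schema):

{
  "state": "done",
  "forms": [
    {
      "form": "caption",
      "state": "done",
      "result": { "caption": "a red panda sitting on a tree branch" }
    },
    {
      "form": "nsfw",
      "state": "done",
      "result": { "nsfw": false }
    }
  ]
}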

As always, you have to look at the API documentation for each endpoint you want to use. But very simply, you send the URL of the image you want to interrogate and specify which interrogation forms you want to use, like so:

{
  "forms": [
    {
      "name": "caption"
    },
    {
      "name": "nsfw"
    }
  ],
  "source_image": "https://i.redd.it/ggkxrfgq7u9a1.png"
}

It otherwise works similarly to image generation from a client’s perspective, with the difference that you don’t need to use a check/ endpoint; you can keep polling the interrogate/status/ endpoint directly. Once a form is completed, you will get a result from that form matching its type.
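
A minimal polling client might look like the following sketch (the endpoint paths follow the pattern described above and use the anonymous API key; consult the API documentation for the authoritative schema):

import time
import requests

HORDE = "https://stablehorde.net/api/v2"
HEADERS = {"apikey": "0000000000"}  # anonymous API key

payload = {
    "forms": [{"name": "caption"}, {"name": "nsfw"}],
    "source_image": "https://i.redd.it/ggkxrfgq7u9a1.png",
}
job = requests.post(f"{HORDE}/interrogate/async", json=payload, headers=HEADERS).json()

# No check/ endpoint needed: poll interrogate/status/ directly.
while True:
    status = requests.get(f"{HORDE}/interrogate/status/{job['id']}").json()
    if status.get("state") == "done":
        break
    time.sleep(2)

for form in status["forms"]:
    print(form["form"], "->", form["result"])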

Currently we support three interrogation forms: caption, nsfw, and interrogation

  • Caption: Returns a string describing the image.
  • NSFW: Returns a true/false boolean depending on whether the image displays NSFW imagery or not.
  • Interrogation: Returns a dictionary of keywords best describing the image, each with an accompanying confidence score. This takes the most time of all the interrogations and is rewarded accordingly in kudos.

As I mentioned before, the worker code had to be completely refactored. It now lives in a new repository, as the nataili repo will soon turn into a pip package that can be installed externally; this move is in preparation for that.

To start an interrogation worker, you use the same code base, but launch it with a different bridge script:

./horde-interrogation_bridge.cmd -n "The Deep Questioning" --max_threads=5 --queue_size=5

As the models used by the interrogation worker are much more lightweight, it actually benefits more from high max_threads and queue_size values, so feel free to crank those up to best utilize your worker. A lot of the new code changes I made to the horde also allow your worker to pick up many forms at the same time, which will cut down on the poll requests to the horde, further reducing idle time.

However, be careful not to set these too high. Because you’re picking up the requests in advance, nobody else will work on them until your worker gets to them. If your queue is high and your threads are low (or slow), then you’ll notice your horde performance is not going to be great.

However, the result of each form will be sent back as soon as it’s done, so as to save as much time as possible.

One more thing to note is that an interrogation worker is different from a Stable Diffusion worker; as such, the two cannot use the same name! However, they DO use the same bridgeData.py. If you plan to run both types of workers, use the command line arguments to tweak the bridge settings accordingly, instead of having to change your bridgeData.py all the time.

In the future I plan to further tweak the bridge so that it can run in parallel with the Stable Diffusion worker, to best utilize your spare processing power. I also want to tweak the model loading so that you can optionally offload the whole thing to the CPU. But I need to test whether the speeds for this make sense first.

Another cool possibility from this refactor is that it opens the door for different worker types on the horde, which in turn gives me an opening I’ve been considering for a while now: the complete merge of the Stable and KoboldAI hordes into one service. This would reduce the amount of code juggling I have to do, and hopefully simplify things for everyone with a common kudos system.

I am excited to see what use cases you all will come up with for this new system!