Image Remix on the AI Horde

The initial deployment of Stable Cascade (SC) on the AI Horde supported only text2image workflows, but that is just a subset of what this model can do. We still needed to onboard the rest of its capabilities.

One such capability was the “image variations” option, which allows you to send an image to the model and get a variation of that image back, perhaps with extra elements added in, using the unCLIP technique. This required quite a bit of work on hordelib so that it could use a completely different ComfyUI workflow, but ultimately it was not much harder than adding img2img capabilities to SC.

The larger difficulty came when I wanted to add the ability to remix multiple images together. The problem was that, until now, the AI Horde only supported sending a single source image and a single source mask, so a varying number of images was not possible at all.

So to support this, I needed to touch all areas of the AI Horde. The AI Horde itself had to accept the extra images, upload each of them to my R2 bucket, and provide individual download links. The SDK had to know to expect those images and provide methods to download them in parallel to avoid delays. Finally, the reGen worker had to be able to receive those images and send them to hordelib, which in turn had to know how to dynamically adjust a ComfyUI pipeline on the fly, adding as many extra nodes as required.
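To illustrate the SDK side, here is a minimal sketch of downloading every extra source image in parallel. The function names are mine and the actual SDK code differs; this just shows the idea of fetching each download link concurrently instead of one at a time.

```python
import concurrent.futures

import requests


def download_image(url: str) -> bytes:
    """Fetch a single source image from its individual download link."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.content


def download_all(urls: list[str]) -> list[bytes]:
    """Download all extra source images in parallel, so we pay one
    round-trip of latency total rather than one per image."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(download_image, urls))
```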

So after two weeks of developing and testing, we finally have this feature available. If your Horde front-end supports the “remix” feature, you can send from 1 to 6 images to this workflow along with a prompt, and it will try its best to “squash” them all together into one composition. Note that the more images you send, and the larger the prompt, the harder it will be for the model to “retain” all of them in the composition. But it will try its best.
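For integrators, a remix request might look roughly like the sketch below. The field names (particularly extra_source_images) reflect my reading of the API at the time of writing, so verify them against the live API documentation before relying on them.

```python
import base64

import requests

# Hypothetical remix request; verify the exact schema (especially
# extra_source_images) against the AI Horde API documentation.
with open("avatar.png", "rb") as f:
    avatar_b64 = base64.b64encode(f.read()).decode()
with open("logo.png", "rb") as f:
    logo_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "a mascot combining both subjects",
    "source_image": avatar_b64,
    "extra_source_images": [{"image": logo_b64}],
    "models": ["Stable Cascade 1.0"],  # as listed on the horde
}
response = requests.post(
    "https://aihorde.net/api/v2/generate/async",
    json=payload,
    headers={"apikey": "0000000000"},  # the anonymous key
)
print(response.json())
```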

As an example, here’s how the model remixes my own avatar. You’ll notice that the result captures the general concepts of the image, but can’t follow it exactly, as it’s not doing img2img. The blur is probably caused by the need to upscale my original image, which is something I’d like to fix in a later pass.

Likewise, this is the Haidra logo:

And finally, here’s a remix of both the logo and the avatar together:

Pretty neat, huh?

This ability to send extra source images also lays the groundwork for the Horde to support things like InstantID, which I hope to be able to start working on soon.

Error Codes and Styling

Does the above image look scary? If so, you might just be a software developer!

The above is the result of a long-time-coming but massive pull request to standardize the formatting of the AI Horde code. I’ve been meaning to do this ever since I discovered the black and ruff tools, but I’ve been procrastinating for almost as long. Well, I finally got my ass in motion to do it. Including writing tests and doing some careful regression testing, it took me about a week in total. And I still didn’t apply all of the ruff checks either.

What this means is that from now on, anyone sending a change can simply run ruff . --fix && black . and it will automatically format all changes to match our standards, making the code predictable to read and reducing some bad programming practices and potential tech debt.

Also, as a software dev, finally doing this kind of operation is so satisfying. It’s not much fun to do, but you’re very happy to have it done. What’s a good analogy for this? A peeling session? (Post your best analogies in the comments.)

Soon after, I also deployed another change that might be useful for AI Horde integrators out there: I have now added unique error return codes to each error message from the horde. This should make it easier to handle the various errors the horde might spit out in code, instead of having to parse an error message which might potentially change in the future. It also allows you to do things like error-code translations (although I think it might be useful to allow people to send translations for the various RCs to the horde as PRs, so that we don’t force every frontend to reinvent them).
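As a minimal sketch of what this enables, a frontend can now branch on the stable return code instead of the message text. I’m assuming here that the error body carries the code in an rc field next to the human-readable message, and the RC names in the mapping are illustrative; check the RC README for the real values.

```python
import requests

# Hypothetical RC-to-text mapping a frontend might maintain;
# the RC names here are illustrative placeholders.
FRIENDLY_ERRORS = {
    "KudosUpfront": "You need more kudos to run this request.",
    "InvalidAPIKey": "Your API key was not recognized.",
}

response = requests.post("https://aihorde.net/api/v2/generate/async", json={})
if not response.ok:
    error = response.json()
    # Match on the stable RC instead of the mutable message text.
    rc = error.get("rc")
    print(FRIENDLY_ERRORS.get(rc, error.get("message", "Unknown error")))
```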

I also wrote a README page detailing all the existing RCs.

There have also been various bugfixes and improvements to the worker, SDK, and hordelib code. Remember to update your reGen worker regularly!

Once again, many thanks to NLNet for providing the funding for “necessary chore” tasks like these. These kinds of things are not a ton of fun to do, as they don’t add any new functionality to the project, but they massively help future development by reducing tech debt.

Webhooks on the AI Horde

Today I am excited to announce that I have deployed a new feature which allows you to specify a webhook when requesting a generation on the AI Horde. If you do, then once each generation is completed, the AI Horde will send a POST request to the specified URL with a payload matching the request type.
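A minimal receiver can be as simple as the sketch below, using Python’s standard library. The payload shape depends on your request type, so treat the id field here as an assumption rather than the exact schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HordeWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON payload the AI Horde POSTs on completion.
        length = int(self.headers.get("Content-Length", 0))
        generation = json.loads(self.rfile.read(length))
        print("Generation completed:", generation.get("id"))
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HordeWebhookHandler).serve_forever()
```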

Apropos, it’s a good time to announce that I have started writing some integration documentation for the AI Horde, which contains information about the available API and SDKs, and of course the new webhooks. Feel free to send PRs to improve it!

This new functionality enables a few more efficient ways of using the AI Horde. For example, instead of polling the AI Horde every second or so, you could rely on webhooks and only do a manual poll every 30 seconds or so, in case the results have not been webhooked over to you yet. The AI Horde will retry a webhook 3 times before giving up, so in case of network issues etc. you can always check the status manually as usual. This approach reduces the load on the AI Horde while at the same time giving you faster results. It’s what I call a win-win!
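In rough terms, that hybrid strategy looks like the sketch below, assuming a webhook_received event set by a receiver like the one above and a hypothetical poll_status() helper:

```python
import threading
import time

webhook_received = threading.Event()  # set by your webhook handler


def wait_for_generation(request_id: str, timeout: float = 300) -> None:
    """Prefer the webhook; fall back to a slow manual poll."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # Wake up as soon as the webhook arrives, or after 30s.
        if webhook_received.wait(timeout=30):
            return
        # Fallback: one manual status check instead of polling every second.
        if poll_status(request_id)["done"]:  # hypothetical helper
            return
```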

Of course, not all clients can support webhooks, so for those who can’t, the existing functionality will continue working as usual.

Ludicrous Speed!

One very useful feature I’ve been meaning to support on the AI Horde for a while now is request batching. Request batching is the ability to generate multiple Stable Diffusion images in parallel using mechanisms internal to the ML libraries, instead of splitting them into multiple processes. Because it reuses the common parts of the request, it allows the GPU to generate each extra image with just a 20% slowdown instead of 100%, so long as you stay within your GPU’s capacity.
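To put numbers on that claim, here is the back-of-the-envelope math, treating the 20% figure as a rough estimate rather than a measured constant:

```python
def batched_time(n_images: int, base: float = 1.0, overhead: float = 0.2) -> float:
    """Relative time for one batched job: the first image costs `base`,
    each extra image only ~20% of that."""
    return base + overhead * (n_images - 1)

# Sequential: 20 images cost 20.0 units of time.
# Batched:    1.0 + 0.2 * 19 = 4.8 units, i.e. roughly 4x faster.
print(batched_time(20))  # 4.8
```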

Soon after we finished adding LCM support, I turned my attention to making this a possibility, as between these two features, we could massively increase the overall speed at which the AI Horde completes requests. The only problem was the overall complexity of handling this in the inference code.

Today I’m proud to announce that the AI Horde natively supports smartly batching multiple images in the same request when possible, which can result in massive improvements in overall speed! Read on for more details of how we achieved it.

By relying on ComfyUI, the most difficult part was already done, and earlier work by Tazlin and Jug had prepared the ground for our hordelib library to send such batched requests to the comfy engine. But I still had a lot of work to do, not only to allow the AI Horde to accept and queue such loads properly, but also to enable the worker to understand payloads for multiple images.

Fortunately, due to the new architecture of the reGen worker, adjusting it to accept one job for multiple images and then submit multiple image results at the end was easier than I expected. Of course, handling the multiple image submissions was the hardest part, and I had to basically refactor that whole area of the code.

The AI Horde queuing part was not as code-intensive. Making a worker pick up multiple requests when possible was not particularly hard; not giving the worker more than it can “chew through” was. You see, your worker might be able to do 1 image at 2048×2048, and it might be able to do 20 images at 512×512. However, give it 20 images at 2048×2048 and it will fall down and die! So this required a bit of fancy footwork. The way I solved it is that the worker declares how many batched images it can do along with its max resolution. The horde then assumes that the worker can safely achieve its max batching at 1/3rd of its max resolution. Past that point, as the requested resolution of a job increases, the horde will smartly reduce the number of batched images it gives that worker.

Practically this means that if I declare I can do 20 batched images and my max resolution for one image is 2048×2048, then I will pick up my full 20 images at 512×512, but only 7 images at my full resolution.
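Here’s a plausible reconstruction of that rule in code, assuming the 1/3rd threshold applies to total pixel count and the batch scales down proportionally past it; the horde’s real logic may differ in its details:

```python
import math


def allowed_batch(max_batch: int, max_pixels: int, requested_pixels: int) -> int:
    """Scale the batch size down once a job exceeds 1/3rd of the
    worker's max resolution (a guess at the horde's rule)."""
    threshold = max_pixels / 3
    if requested_pixels <= threshold:
        return max_batch
    return math.ceil(max_batch * threshold / requested_pixels)


print(allowed_batch(20, 2048 * 2048, 512 * 512))    # 20
print(allowed_batch(20, 2048 * 2048, 2048 * 2048))  # 7
```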

Therefore the AI Horde will continue smartly slicing a request for multiple images into a number of jobs; only this time, instead of each job being 1 image, it can be multiple. Effectively this means the horde can utilize the maximum processing power of each worker far more efficiently, and therefore the overall performance improves!

There were a few hiccups along the way as well. For one, I realized that the hordelib code did not handle batching for img2img requests at all, so I had to roll up my sleeves, jump into the way hordelib translates requests into ComfyUI nodes, and figure it all out. It took me a while, but now that I understand this better, it will be easier for me to add even more fancy additions to our comfy workflows!

Another somewhat important problem is that the seed returned by batched requests in comfy is not accurate. The explanation is a bit too technical for this post, but at the end of the day there is an extra variable involved in replicating an image generated via a batch, on top of the generation seed. Currently the horde will return the relevant “batch_id” in the generation metadata, which I hope to use in the future to add a way to replicate images from batched requests as well.

For now, if you need to ensure you can always replicate your images via a seed, the best way to do it is to request them using the new disable_batching keyword on your request. Setting this to true will make your request always split into 1 image per job, which is the way the horde used to work until now. However, since disable_batching is significantly less optimal than batching, it is only available to trusted users (i.e. those who’ve been running workers for a while) and patreon supporters.
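For example, a sketch of such a request (I’m showing disable_batching at the top level of the payload; verify its exact placement against the API reference):

```python
# Sketch of a request that opts out of batching to keep seeds
# reproducible; requires a trusted-user or patron API key.
payload = {
    "prompt": "a reproducible landscape",
    "params": {"seed": "12345", "n": 4},
    "disable_batching": True,  # each of the 4 images becomes its own job
}
```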

Of course, you can continue manually splitting your requests into 1 image per request, but that already incurs increased kudos costs, and in the future it might be disincentivized further for the health of the AI Horde.

Between batching and LCM proliferation, we’re already starting to see significantly improved generation times on the AI Horde, to the point that, with enough priority, you can receive 20 images at 1024×1024 in less than a minute! A small problem is that currently one of our most popular frontends, Artbot, defaults to manually splitting each request into 1 image per request. Nevertheless, its developer Rockbandit is already hard at work making their requests batching-compatible, and once that happens, I expect the overall speed to massively improve!

LCM and multiple versions of LoRas on the AI Horde

Finally, 2024 is here, and that allowed me a bit of free time to work on some of my NLNet tasks. The first thing on my list was adding LCM support to the AI Horde, as it provides massively reduced step counts, which for a crowdsourced service like ours makes all the difference in how much we can deliver.

For those who don’t know, LCM (Latent Consistency Model) is a new breakthrough in Stable Diffusion that allows one to “finetune” the model in such a way that an image can be generated using 10% of the steps previously required. So an image which would require 30 steps to converge now needs just 3! That is a massive boost for lower-range GPUs. For high-range GPUs, it opens avenues such as video generation, as an image can be produced at millisecond speeds!

Given the benefits, I wanted to work on this as soon as possible, and thanks to the flexibility of the FOSS GenAI technology enthusiasts, we already had a great way to use LCM: LoRas that turn any SD model into an LCM version.

However, there was a snag. You see, while the AI Horde already supports all LoRas on CivitAI, we never supported the different versions of each one, as we never expected anyone would want to use anything but the latest. Unfortunately, people on CivitAI started using the versioning system for “alternative” versions, and the LCM LoRa took the same approach, with a different version for each sampler.

So the first order of business was to allow the AI Horde to understand and support every version of each LoRa! This took the better part of a full work-week of development and debugging, and then another week of troubleshooting and fixing in beta.

The good news is that this led to us also identifying and squashing a very frustrating, long-running bug where workers would rarely return previous images they’d generated instead of the ones requested. Getting someone else’s image is something we definitely don’t want to ever happen, so we’re very happy we figured it out.

With that out of the way, I simply had to update the AI Horde itself to handle the payload for specific LoRa versions, then add support for the LCM sampler, and then add some ways to urge users to switch to it.

If you’re an AI Horde integrator, we strongly suggest you change your default settings to utilize LCM LoRas in your generations. You can get them from the same API from which you receive the model details, under the modelVersions key. To use them, you need to send the exact version ID as a string (found in modelVersions[#]['id']); this won’t accept a version name. You will also need to set is_version: true in the LoRa payload. This will tell the worker to look for a version instead of a LoRa ID.

Sending the LoRa name or ID will continue working as usual, grabbing the latest version (modelVersions[0]) from that list, so your existing implementations should keep working unchanged.
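Put together, the LoRa entries in a payload might look like this sketch (the name and version ID here are placeholders, and any field beyond name and is_version should be verified against the API reference):

```python
# Old style: by LoRa name/ID, always resolving to the latest version.
lora_latest = {"name": "SomeLoraName"}

# New style: pin an exact CivitAI version ID, e.g. taken from
# modelVersions[#]['id'], and flag it with is_version.
lora_pinned = {
    "name": "123456",    # the version ID as a string, not a version name
    "is_version": True,  # tells the worker to look up a version ID
}
```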

Also, we recently added AlbedoXL to our model list, to provide a better baseline for SDXL generations than basic SDXL 1.0, which requires a refiner to work. Using Albedo, you can get generations that do not require a refiner in your workflow at all, and much less “fuzzy” generations in the process!