Stable Diffusion XL Beta on the AI Horde!

In the past month or so, I’ve been collaborating with stability.ai to help them improve the quality of the upcoming SDXL model, providing expertise on automating and efficiently testing new iterations of their model using the AI Horde as middleware.

Today I’m proud to announce that we have arranged with stability.ai that the new SDXL beta will become available on the AI Horde directly, in a form that will be used to gather data for further improvements of the model!

You will notice that a new model is available on the AI Horde: SDXL_beta::stability.ai#6901. This is SDXL running directly on compute from stability.ai. Anyone with an account on the AI Horde can now opt to use this model! However it works a bit differently than usual.

First of all, this model will always return 2 images, regardless of how many you requested. The reason is that each image is created using a different inference pipeline and we want you to tell us which one you think is best! As a result, the images you create with this model are always shared. This means that the images you generate will be stored and then used to further improve the model, based on the ratings you provide.

Which ratings? Each time you generate images with SDXL you can immediately rate them for a hefty kudos rebate! Assuming your client supports it, you should always report back which of the two images is better. You can then optionally also rate each of them for aesthetics (how much you subjectively like them) and for artifacts (how marred the image is by generation imperfections like extra fingers etc.).

Lucid Creations has already been adjusted to support this!

For best results with SDXL, you should generate images at 1024×1024, as that is its native resolution. The minimum allowed resolution for SDXL has therefore been adjusted accordingly.
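
If you are integrating directly against the API, here is a minimal sketch of such a request via the standard asynchronous generation endpoint (the prompt is just an example; adjust the payload to your needs):

curl -X 'POST' \
  'https://aihorde.net/api/v2/generate/async' \
  -H 'accept: application/json' \
  -H 'apikey: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "a watercolor painting of a lighthouse at dawn",
  "params": {
    "width": 1024,
    "height": 1024
  },
  "models": ["SDXL_beta::stability.ai#6901"]
}'

Remember that regardless of how many images you request, this model will always return 2 for comparison.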

Also, as I mentioned before, this beta is primarily going to be used to improve the model; therefore we’re disallowing anonymous usage of this model for the moment. However, registering an account is trivial and free, and will also give you more priority for your generations!

The coolest part is that as stability.ai further improves the model, it will automatically become part of this test, without you having to do anything! As such, you will eventually start getting better and better generations from the SDXL_beta on the AI Horde. This is why your continued rating of the quality is so important!

So please go forth and use the new model and let us know how it performs!

State of the AI Horde – July 2023

It’s high time I wrote another one of these posts to keep everyone up to date. It’s been a fairly slow month as far as the Horde is concerned. That’s not to say that we produced less content, but rather that there hasn’t been a lot of progress on features, as all the developers appear to have been busy with other projects.

(In case you’re new here and do not know what the AI Horde is: it is a crowdsourced free/libre software Open API which connects Generative AI clients to volunteer workers who provide the inference.)

LoRas in the mix

Since the last State of the AI Horde, we saw the introduction of LoRas into payloads, allowing everyone access to all LoRas on CivitAI at the click of a button. I have likewise been recording the number of times each LoRa is used, storing it by its CivitAI ID. Below you can see the top 25 LoRas used since we started recording them. You can check which one it is by appending the number to this URL: https://civitai.com/models/<LORA ID>. For example, the top entry resolves to https://civitai.com/models/82098. Using this method we can see that the Add More Details LoRa is clearly one of the most popular, followed closely by Details Tweaker. People do love adding more details!

All in all, a total of 801,326 successful LoRa uses have been recorded on the AI Horde!

 lora  |     count 
-------+-----------
 82098 |     46095
 58390 |     36222
 48139 |     18042
 60724 |     16980
 13941 |      9245
 32827 |      9191
 87245 |      9083
 43814 |      8999
 12820 |      8148
 28742 |      8009
 9652  |      7960
 82946 |      7922
 25995 |      7671
 28511 |      7225
 87080 |      5966
 16928 |      5950
 6693  |      5431
 24583 |      5121
 48299 |      4905
 42214 |      4838
 63278 |      4821
 9025  |      4105
 9651  |      3828
 10816 |      3786
 37006 |      3759

Image Generation Stats

In image news, our usage remains fairly stable, which is quite impressive if one considers just how much extra slowdown is added by all these LoRas. We are stable at ~300K images generated per day, or roughly 10M images per month. Worth noting that since it started, the AI Horde has generated ~70 million images and close to 1 whole petapixelstep, all for free!

On the model side, Deliberate further solidifies itself as the best generalist model, while Stable Diffusion drops down to 3rd place as anime takes the 2nd spot. Our own special Kiwi ResidenChief’s model seems to have been a massive success as well, coming out of nowhere to grab a solid 4th place. And this time the furries are also out in force, capturing the 6th position! Pretty cool stuff!

  1. Deliberate 31.6% (3127771)
  2. Anything Diffusion 11.4% (1133058)
  3. stable_diffusion 9.6% (951705)
  4. ICBINP – I Can’t Believe It’s Not Photography 6.7% (662205)
  5. Dreamshaper 5.6% (550335)
  6. BB95 Furry Mix 3.7% (370853)
  7. Hentai Diffusion 3.1% (303351)
  8. Counterfeit 2.6% (254197)
  9. ChilloutMix 1.9% (186525)
  10. Pretty 2.5D 1.8% (178713)

Text Generation Stats

On the text side, not much has changed since May, with our generation staying similar at ~3M requests fulfilled and 377 Megatokens.

Likewise in the model top 10, Pygmalion 6B is still leading the pack, with Mr. Seeker’s Erebus and Nerybus still in heavy use.

  1. PygmalionAI/pygmalion-6b 36.4% (1118193)
  2. KoboldAI/OPT-13B-Erebus 9.2% (284531)
  3. KoboldAI/OPT-13B-Nerybus-Mix 6.4% (197689)
  4. Pygmalion-7b-4bit-GPTQ-Safetensors 4.8% (148631)
  5. VicUnlocked-alpaca-65b-4bit 3.8% (116034)
  6. chronos-hermes-13B-GPTQ 3.4% (105727)
  7. KoboldAI/OPT-30B-Erebus 3.2% (99782)
  8. manticore-13b-chat-pyg-GPTQ 2.7% (82276)
  9. 13B-HyperMantis/GPTQ_4bit-128g 2.3% (69403)
  10. asmodeus 2.0% (61863)

Hordelib is back!

A month ago, I mentioned that hlky sent a bogus DMCA take-down against hordelib, which we contested. I am glad to announce that this process has finally completed: hordelib is once again visible on GitHub and all contributions by hlky have been purged. I honestly hope that’s the last I’ll hear about this person…

Lemmy and Reddit

The main reason for being otherwise busy is that I’ve been furiously transferring my Reddit presence to my own self-hosted Lemmy instance, because Reddit is speed-running enshittification. I won’t bore you with the details, but you can read up on some of my work and see some of my development in the relevant blog tag.

However I do want to say that the instance I’ve fired up, the Divisions by zero, has been more successful than I could have ever imagined, with ~10K users registered, thousands of subscribers to the communities, and one of the best admin teams I could hope for!

Unfortunately there’s also been reddit drama which has been mightily distracting to me, but things are slowly settling down and I am putting my reddit days well behind me.

As it relates to the AI Horde however, we do have some cool communities you should subscribe to.

I am likewise already planning more events and automation to more closely tie the AI Horde into the Lemmy instance for cool art stuff! Stay tuned and/or throw me your ideas!

Prompt Challenges

R from our mod team has started running some cool prompt challenges in the Discord server, which you are all more than welcome to join! The winner gets a nice bundle of kudos, not to mention the amount you get by simply posting. It’s just fun all around, and the winners are featured in the Lemmy communities as well!

Worker Updates

Tazlin has been hard at work improving the AI Horde Worker with bugfixes (not to mention the huge amount of tech support given in Discord). As a result, the AI Horde Worker has become much, much more stable, which should have a good impact on your kudos-per-hour! Just a quick shout-out to an invaluable collaborator!

A Lot more workers

I don’t know how it’s happening but the AI Horde is nearing 100 dreamers! I am getting the fireworks ready for the first time we hit this threshold!

Funding and Support

My current infrastructure has been sufficiently stable since the last migration to a dedicated host, which I think you have experienced as a low number of downtimes and interruptions since.

This is my usual opportunity to point out that running, improving and maintaining the AI Horde is basically a full-time job for me, so please, if you see the open commons potential of this service, consider subscribing as my patron, where I reward people with monthly priority for their support.

If you prefer, you can instead sponsor me on GitHub.

Final Word

While development slowed significantly in June, we’re still doing significant work for the open commons. I have just not had the mental capacity to build up hype as much as I used to, and to make it worse, the social media landscape is completely up in the air at the moment.

I am really hoping more people can step up and help promote the AI Horde and what it represents as my workload is just through the roof and to be perfectly honest, I am at the limit of my “plate-spinning” capabilities.

Please talk about the AI Horde and the tools in its ecosystem. The more people who know about it, the more valuable it becomes for the benefit of everyone!

We have plenty of ways one can help, and we shower everyone doing so with kudos. From people sharing images and helping others in the community, to developers bug-fixing my terrible code, to community managers on Discord and admins on Lemmy. If you want to help out, let us know!

Client and Bot Updates

Lucid Creations

Lucid Creations has gotten a few more updates, the most recent being the ability to load your generation settings from a previous generation you have saved, plus a few other quality-of-life fixes.

More Styles

The AI Horde stylelist has received LoRa support, and about 40 new styles have been added using the awesome work by konyconi. Likewise a new category called “stylepunks” has been added, which will generate using random styles from among those new ones.

Mastodon & Reddit Bots

The Mastodon and Reddit bots have been updated to support LoRa styles.

Discord Bot

The official AI Horde Discord bot has also been updated by ZeldaFan to support the new LoRa styles, but with no category support yet. Hopefully soon.

Other Clients

A ton of other clients are receiving a flood of updates; ArtBot and aislingeach in particular are both seeing very rapid development. Remember to check their individual Discord channels if you haven’t checked them out yet.

Hordelib repo is temporarily down due to DMCA

As I mentioned last time, we received a DMCA take-down for a couple of missing attributions on AGPL3 files. We have since added those back, but as is the case with DMCAs, once you start them, they don’t stop on their own.

We had already sent a counter-notice, but unfortunately not fast enough. The hordelib repo has now been hidden from the world. You can read the whole bogus DMCA requesting it here.

I’ll remind you that hlky made no attempt to request those attributions informally, but went straight to a DMCA, which tracks, since this person is always acting in bad faith. It’s just that in this case it’s a convenient way to attack the AI Horde project once more.

This is a minor inconvenience, as hordelib is still available on PyPI, but annoying nonetheless.

Hopefully we’ll be back soon enough, and until then we’ll just re-host elsewhere for a while.

The AI Horde now seamlessly provides all CivitAI LoRas

Demand for the AI Horde to support LoRas has been high ever since they were discovered and unleashed into the Stable Diffusion ecosystem. About 2 months ago we were really close to releasing support for them, but alas “stuff happened”, which culminated in us having to completely rewrite our inference backend.

Fortunately, ComfyUI already supported LoRas, so once we switched to using it as our inference library, we just needed to hook into its LoRa capabilities. I, along with the extremely skilled hordelib devs Jug and Tazlin, have been working constantly on this for the past half month or so, and today I’m proud to announce that LoRa capability has been released to all AI Horde Workers!

You might have seen earlier that some clients, like ArtBot, the Krita plugin and Lucid Creations, already supported LoRas. This was however only via the few beta-testing workers we had. Now those clients will start seeing a lot more speed as workers with the power to do so start running them.

This also signifies our tightest integration with CivitAI yet! We were already utilizing them, but mostly just to pull Stable Diffusion checkpoint files. The LoRa integration however goes far, far further than that, using CivitAI as an integral part of our discovery and control process.

Through the very useful REST API provided by CivitAI, we have developed a method whereby we can discover, download and utilize any LoRa that is hosted there. No more do you need to search, validate compatibility, download and so on; the AI Horde and its Worker will handle all that automatically!
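
To give a feel for what that discovery looks like, here is a minimal sketch using CivitAI’s public search endpoint (the query string is just an example):

curl 'https://civitai.com/api/v1/models?types=LORA&query=details&limit=3' \
  -H 'accept: application/json'

The response includes each matching LoRa’s ID, trigger words and download URLs, which is what allows the Worker to fetch and validate them automatically.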

To showcase this point, I’ve made a small video showing off this capability in Lucid Creations, which I’ve tightly integrated with CivitAI, so that it can look up any LoRa by ID or partial name, display all relevant information, and even warn you of compatibility issues.

Sorry about the sound quality. I don’t have a professional streamer setup, just a shitty webcam 😀

I’ve put a lot of effort into making finding and using LoRas as painless as possible, because there are literally tens of thousands of the things. Manually searching through the website and copy-pasting IDs is a complete PITA, never mind downloading and trying them out one by one.

Through the AI Horde, you can simply type the partial name of what you want, and keep mixing and matching them until you achieve the desired outcome! Trying out new LoRas now takes literally a few seconds!

Also, many kudos to the CivitAI devs, who quickly implemented a request I put through to make this even easier and faster for everyone.

As this has just been released, only a few UIs currently support it: Lucid Creations, which has full discovery, and ArtBot, which requires you to know the ID. The Krita plugin also supports them. I expect the other UIs to follow in the coming days.

Hopefully this is another step in unleashing your creativity, without requiring a powerful GPU of your own!

Stay tuned for the next update adding access to all Textual Inversions as well.

Lucid Creations is the first client to support LoRa on the AI Horde!

I don’t have time to keep Lucid Creations well updated but sometimes I just need to show others how it’s done!

So I managed to make it the first UI to support LoRa on the AI Horde!

In case you don’t know what LoRas are, the short version is that they are nothing less than a breakthrough in Generative AI technology, drastically condensing the time and compute that training (or “fine-tuning”) a model needs!

This kind of breakthrough, along with the achievements of new models such as Stable Diffusion and Llama, is what is causing the big tech companies to scramble, to the point that OpenAI ran to the government to ask them to regulate future breakthroughs (but please, please don’t regulate what they’re currently doing, OK?).

But I digress. I haven’t officially announced LoRa support on the AI Horde yet, as we’re still trying to squash all the bugs on the workers, but I hope the addition to Lucid Creations will help other integrators see how it can be done.

Epic AI Horde Documentation Update

Today I started by wanting to do some work on the upcoming LoRa functionality for the AI Horde; however, I first wanted to update the AI Horde README a bit, as it was still referring to things like the KoboldAI Horde (which was merged into the AI Horde a while ago).

As I was finishing it, I noticed that the written FAQ had also grown a bit stale, and there are a few new points I feel I need to make absolutely clear, such as kudos sales being forbidden, or the kudos calculation now being handled by a new NN model, etc.

It quickly dawned on me however that there’s a ton of Horde terminology I’m using in the FAQ which someone might not understand. We now have Dreamers, Scribes and Alchemists, Workers and Bridges, Integrations, Frontends, Plugins and Bots, etc. Quite a few things for someone unfamiliar with the project.

So I decided to try writing some cross-referenced terminology in the context of the AI Horde, but the more I kept writing, the more terms I realized I needed to define as well. And once I started pulling on this thread… well, let’s just say that I spent a bit more time than I planned on documentation today, and the Terminology wiki page is now available, extensively cross-referenced within itself and to Wikipedia as needed.

I honestly had to forcefully stop myself from expanding it, because I kept thinking of more things to define. So if you’re the kind of person who likes doing this sort of thing as well, feel free to improve it further!

State of the AI Horde – May 2023

I’ve been meaning to write one of these posts every month, but the events since I wrote the last piece have been fairly disruptive. With the loss of our maintainers of nataili, we’ve been forced to put our heads down to alleviate the backend issue, and I didn’t want to write another “State of the AI Horde” until that business was completed. More about that later. First, let’s look at the basics.

(In case you’re new here and do not know what the AI Horde is: it is a crowdsourced free/libre software Open API which connects Generative AI clients to volunteer workers who provide the inference.)

More Requests, More Power!

The total number of images generated has stayed relatively stable since last month, at ~12M images. However the total amount of terapixelsteps is up to 4.2 TPS, compared to 3.7 TPS last month. This shows that people are looking for more detail at higher resolutions, instead of just grabbing more images. Unfortunately we can’t capture the impact of ControlNet as easily, but suffice to say its demand is pretty significant.

On the LLM front, we’ve generated 3.5M texts, for a total of 374 Megatokens. We have effectively tripled the LLM output! This makes sense as we see plenty of traction in the LLM communities since Google Colab started banning Pygmalion.

Also, did you know Cubox from our Discord community has gone and set up a whole public Grafana server to track our stats? Head over and check it out yourselves.

Top 10 Stable Diffusion models

No significant changes since last month in this chart, with Deliberate still proving its worth and further solidifying its position with a 25% use rate throughout the entire month. Stable Diffusion 1.5 sees plenty of use too.

Anime models seem to be losing some popularity across the board, and Dreamshaper kicked Abyss OrangeMix off the board, settling at a decent 5%. Good showing!

  1. Deliberate 25.8% (3067246)
  2. stable_diffusion 14.4% (1707729)
  3. Anything Diffusion 9.0% (1067386)
  4. Dreamshaper 5.0% (596651)
  5. Realistic Vision 4.7% (563079)
  6. URPM 3.6% (428415)
  7. Hentai Diffusion 2.9% (348985)
  8. ChilloutMix 2.6% (314301)
  9. Project Unreal Engine 5 2.6% (307907)
  10. Counterfeit 2.2% (260739)

Top 10 Text models

As is to be expected, Pygmalion 6b is still leading the charts by a significant margin, but its big brother, the recently released Pygmalion 7b, which is based on the Llama models, has already secured 3rd place and is only set to further cannibalize 6b’s position.

Erebus and Nerybus continue to fill up the rest of the board, with a few new 4bit models finally starting to come into the fray.

  1. PygmalionAI/pygmalion-6b 44.5% (1571109)
  2. KoboldAI/OPT-13B-Erebus 9.8% (345983)
  3. PygmalionAI/pygmalion-7b 5.2% (183413)
  4. KoboldAI/OPT-2.7B-Nerybus-Mix 3.8% (135223)
  5. KoboldAI/OPT-13B-Nerybus-Mix 3.8% (132564)
  6. bittensor/subnet1 2.5% (86859)
  7. Pygmalion-7b-4bit-32g-GPTQ-Safetensors 2.3% (79577)
  8. gpt4-x-alpaca-13b-native-4bit-128g 2.0% (72358)
  9. pygmalion-6b-gptq-4bit 1.8% (64750)
  10. OPT-6.7B-Nerybus-Mix 1.8% (62300)

Image Ratings keep flowing in

New ratings continue to flow in unabated, but we’re still finding people trying to bypass our countermeasures and automatically rate images, which poisons the dataset. Please don’t do that. If you want kudos, you can easily get more by simply asking in our Discord, instead of wasting our time 🙏

One interesting note is that Stable UI has finally surpassed ArtBot in the number of images rated.

 count  |        client        
--------+----------------------
   6270 | AAAI UI
 110826 | ArtBot
 279112 | Stable UI
   3197 | Unknown
   2700 | ZeldaFan Discord Bot

A total of 400K ratings over the past month. Very impressive.

I have also finally onboarded the full DiffusionDB dataset of 12M images into the ratings DB, so we’ll have enough images to rate for the foreseeable future.

A whole new backend

As I mentioned at the start, the big reason this State of the AI Horde was delayed was the need to switch to a completely new backend. It’s a big story, so if you want the details, do check out the devlog about it.

New Features and tweaks

Not much to announce, since all our effort was in the backend switch, but a few things nevertheless:

  • We have added support for Shared Keys, so that you can share your account priority, without risk.
  • A new kudos calculation model has been put live, replacing the manual way I was calculating kudos for each request based on magic numbers. The new kudos model is a Neural Network trained by Jug; it takes the whole payload into account and empirically figures out the time it would take to generate.
  • Kudos consumption has been removed for weights, and a tiny kudos tax of 1 kudos has been added per request to discourage splitting the same prompt into multiple separate requests.
  • You can now change your worker whitelist into a worker blacklist, so that you avoid specific workers instead of explicitly requesting which ones you want (see the sketch after this list).
  • A new “Customizer” role has been added to allow people to host custom Stable Diffusion models. This is not possible on the Worker yet, but once it is, these people will be able to do it. To get this role, just ping the AI Horde moderators.
  • We had to fight back another raid, on the LLM side this time, which forced me to implement some more countermeasures. Scribes (i.e. LLM workers) cannot connect through a VPN unless trusted, untrusted users can only onboard 3 workers, and there’s now a worker limit per IP.
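
As a minimal sketch of the new worker blacklist option from the list above, a generation payload would look roughly like this (the worker ID below is a placeholder; worker_blacklist flips the workers list from an allow-list into a deny-list):

curl -X 'POST' \
  'https://aihorde.net/api/v2/generate/async' \
  -H 'apikey: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "a cozy cabin in the woods",
  "workers": ["00000000-0000-0000-0000-000000000000"],
  "worker_blacklist": true
}'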

Discord Updates

I’ve been doing some work to improve the social aspects of our Discord server. One part of that is onboarding 3 new volunteer community managers, in the hope that we can start doing more events and interactions with the community.

Another is the addition of a new channel for each Integration I know of. I give the developers/maintainers of these services admin access to their channel, so they can use it to provide support to their users, or redirect users to their own Discord servers if they prefer.

If you have a service integrating into the AI Horde API, please do let me know and I’ll be happy to open a new channel for you and give you admin access to it.

Infrastructure Improvements

Our demand is increasing significantly, with more and more concurrent web sessions every day. Unfortunately, a few database downtimes this month convinced me that hosting it on a Contabo VM with 30% CPU steal is not feasible anymore.

So I finally finalized switching the database to a dedicated server. It’s a lot more expensive, but massively worth it, as our response times have improved 5-fold! A worker payload pop that used to take 1-3 seconds can now take 0.2-1 seconds! As a result, the whole AI Horde should feel way snappier now.

I have also added our first dedicated-box API front-end. Initially it didn’t look to be doing too well, having some of the worst performance, but once I switched to the dedicated database server it suddenly became the fastest by far, making it very obvious just how much impact latency has.

I have also finally deployed a central log server based on Graylog, which should help me track issues across all servers and look them up historically as well.

Funding and Support

All of the above means the AI Horde is now significantly more expensive to run, and it’s almost a second full-time job for me at this point. Currently all of this is funded only via my Patreon subscribers, but that is not scaling quite as fast as my infrastructure costs :-/

To add to this, the stability.ai support seems to have run dry; the GPUs they were providing to the AI Horde have been removed, and I haven’t been able to arrange to bring them back up.

So I think I need to more consistently promote the way to help me sustain the AI Horde as a truly free and open API for everyone.

So, please if you see the potential of this service, consider subscribing as my patron where I reward people with monthly priority for their support.

If you prefer, you can instead sponsor me on GitHub.

I am also looking for business sponsors who either see the value of a Crowdsourced Generative AI Inference REST API or might want to use it for themselves. I basically have no contacts in the AI world so I would appreciate people forwarding this info to whoever might be interested.

Final Word

The last month has been very difficult for the AI Horde but fortunately I’ve had the help of some really valuable contributors from both the AI Horde community as well as the KoboldAI community and we managed to pull through.

Now with our new backend, I expect new features to come much faster and with higher quality. We’re already working hard on horde-wide LoRa support, for one, so stay tuned!

The AI Horde Worker Moves to a Completely New Inference Backend

Close to a month and a half ago, our last remaining maintainer for the nataili library dropped out, and we were left functional but “rudderless” as far as inference goes. We could continue operations, but we couldn’t onboard new features anymore, as neither I nor any of the remaining regulars have ML knowledge.

In desperation, I asked one of our regulars, Jug, who had been helping out with some Python work on the worker, whether he thought it would be possible to switch to ComfyUI as a backend, as it had some good ideas and was modular enough to be of use to us.

To my surprise, Jug not only thought it was a good idea, but jumped in at the deep end and started hacking away to make it work. Not only that, but we managed to pull in another regular developer, Tazlin, who also started helping us with design best practices. As a result, the new library we started developing was built from the ground up to have extensive test coverage, which makes discovering regression bugs that much easier.

The first steps were to develop feature parity, and that required not only wrangling the ComfyUI pipelines so they could be called the way nataili’s were, but also porting features which we were using in the AI Horde Worker, such as CLIP, over to ComfyUI.

This early phase was where I could still provide some help, as I’m pretty good at porting features, writing tests for them, and then integrating things into the AI Horde Worker. But the lion’s share of the work on hordelib was still being done by Jug, with Tazlin making the code much more reliable and maintainable.

A couple of weeks in, we had almost all the features we needed, but this is where the tricky business started. The first thing we noticed was that ComfyUI was not handling multi-threading well, which makes sense, as it’s meant to be used by a single user on a single PC. That added massive amounts of instability, because our AI Horde Worker uses threads for everything in order to nullify latency delays.

So the next phase, for about two more weeks, was stabilizing the thing, which required a much deeper dig into the ComfyUI internals to wrangle its individual processes into a multi-threaded paradigm.

Finally that was done, about a month after I first inquired about moving to Comfy. Then we discovered the next problem: due to all the mutex locks added to prevent multi-threaded instability, the whole thing was now much slower than nataili. Like, significantly so!

So another two weeks were spent figuring out where the slowdowns occurred in our implementation and tweaking things to work more optimally, and even trying to figure out whether there was indeed a slowdown in the first place, as comparisons with nataili were difficult to achieve.

We even built a whole benchmark suite to measure overall inference speeds without being confounded by HTTP and model-loading latency.

But beta testers were still reporting a seemingly lower kudos reward, so we then suspected that the old way of calculating kudos was not applying well to the hordelib inference, due to it working differently. For example, it has no slowdown for weights, but ControlNet gave different speeds than we expected, even different speeds per control type.

To track this down, Jug trained a new Neural Network to predict how much time a generation is expected to take, rather than trying to time each individual feature. The new model was so successful, at 96% accuracy, that we decided to onboard it onto the AI Horde itself as a way to calculate kudos more accurately.

This investigation did point us to some things that worked unexpectedly within ComfyUI; for example, prompts longer than 77 tokens tended to be quite a bit slower, which, after speaking with the ComfyUI devs, turned out to be a quality tradeoff. We did discover a workaround for the AI Horde, but it’s these sorts of things that introduce unexpected slowdowns compared to before. We’re going to continue looking for and tweaking such things as we discover them.

The good news is that the overall quality of images using the ComfyUI branch has increased across the board. Not only that, but weights no longer add extra slowdown (so the extra kudos cost has been removed), and they can also exceed 1.3 without causing the image to distort, which is how most other UIs use them anyway.

The big change is that images with the same payload and the same seed will look different in ComfyUI compared to nataili. This is simply due to the way the inference works, and something we’ll have to live with.

1.0.0

So now we have the three pillars built: Parity, Stability and Speed; it’s time to go live!

hordelib has been bumped to 1.0.0 and the AI Horde Worker to 21.0.0. The next time you run update-runtime, you’ll automatically be switched to the new inference backend, but you may need to update your bridgeData.yaml file ahead of time.

In short, the changes are as follows (a sketch of the resulting bridgeData.yaml section follows this list):

  1. Set vram_to_leave_free and ram_to_leave_free to values that work for you.
  2. Rename nataili_cache_home to cache_home.
  3. Delete any unused keys (like disable_voodoo).
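
As a rough sketch, the relevant part of the updated bridgeData.yaml would look something like this (the values shown are placeholders, not recommendations; pick what works for your hardware):

# how much VRAM/RAM to keep free for the rest of your system
vram_to_leave_free: "80%"
ram_to_leave_free: "80%"
# renamed from nataili_cache_home
cache_home: "./models"
# nataili-only keys like disable_voodoo can simply be deleted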

Also, as a user of the AI Horde, keep in mind that the new Workers do not yet support tiling and pix2pix.

And the new inference is not only available for the AI Horde, but for everyone else as well. Due to the generic way we’ve built it, any Python project which needs access to image generation can now import hordelib from PyPI and get access to all the multi-threaded text2img and img2img functionality we provide!

What’s next?

With the move to hordelib, we are now effectively outsourcing our inference development upstream, which allows us to use new developments in Stable Diffusion as they get onboarded into that software. Hopefully development of ComfyUI will continue for the foreseeable future, as I am really not looking forward to changing libraries again any time soon >_<

This also means that we finally have the capability to onboard LoRas and Textual Inversions as well, which have been requested for a long time, but which our old backend could never support. Likewise with new Stable Diffusion models, and all the exciting new developments happening practically weekly.

It’s been a lot of hard work, but we’re coming out of it stronger than ever, thanks to the invaluable help of Jug, Tazlin and the rest of the AI Horde community!

Key Sharing

The AI Horde is built around the concept of mutual aid, allowing people who have, to aid those who have not. It’s just that the aid is for one specific purpose: using generative AI.

A lot of the design decisions of the AI Horde exist to facilitate this purpose, such as kudos transfers, which have in turn been turned into things like Discord emojis, etc. And I’m always looking for ways to reinforce this behavior.

To this end, I am excited to announce a new feature on the AI Horde: Shared Keys

What are shared keys?

In short, they are API keys which can only be used to generate images and text, and are not valid for any other operations, such as transferring kudos or rating images. The idea is that someone can create a shared key to give to friends and family, allowing them to use the owner’s account priority. This lowers the onboarding requirement of registering their own accounts, without the owner worrying that the key might leak.

Whenever a shared key is used, the kudos are consumed from the origin account, and the priority used for that generation is the same as the owner’s. The generation also shares concurrency with that account, so if you are planning to share with a lot of people, they might end up getting in each other’s way.

Shared Keys can also be given an optional kudos limit and an expiry date, after which they stop working. A kudos limit doesn’t affect their priority; it just prevents the shared key from being used once that limit has been reached.

How do I create a Shared Key?

Until UIs add the option to create them, the simplest way is to use the API web interface directly: http://aihorde.net

Alternatively you can open a console terminal and send a CURL call like so:

curl -X 'PUT' \
  'https://aihorde.net/api/v2/sharedkeys' \
  -H 'accept: application/json' \
  -H 'apikey: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
  "kudos": 10000,
  "name": "Mutual Aid"
}'

Just add your own API key and change the kudos limit and name as you wish. You can also set kudos to -1 to allow unlimited sharing with that key.

We also provide an endpoint to check how much a key has been utilized so far. As a sketch, you can query it like this (using the same key ID as in the response below):
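
curl -X 'GET' \
  'https://aihorde.net/api/v2/sharedkeys/4cb776de-31f0-4895-9fc3-b2e1d17a64f0' \
  -H 'accept: application/json'

which returns something like: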

{
  "id": "4cb776de-31f0-4895-9fc3-b2e1d17a64f0",
  "username": "db0#1",
  "kudos": 2320,
  "utilized": 7684
}

How do I use a Shared Key?

Simply use it in place of a normal API key in the UI of your choice.

Can I modify a Shared Key?

Yes, you can “top-up” existing keys, add/remove their expiry, or delete them altogether.
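
As a sketch, modifying a key goes through a PATCH call on the same endpoint as the creation call above (I’m reusing the key ID from the earlier example; the exact field behavior here is an assumption, so check the API web interface for the authoritative schema):

curl -X 'PATCH' \
  'https://aihorde.net/api/v2/sharedkeys/4cb776de-31f0-4895-9fc3-b2e1d17a64f0' \
  -H 'apikey: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
  "kudos": 20000
}'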

What’s next?

The Shared Keys are designed to be pretty open in their usage. I expect use cases around “service accounts” for communities where people pool their kudos somehow, but I am also curious what other emergent uses people will come up with around this system.

And if you have a use-case which requires tweaking this functionality, do let me know!