Reddit worked despite reddit.

I visit Reddit all the time, and I visited Digg before that. In fact, I've been hooked on this mode of operation since Digg. Suffice it to say, something about link aggregation tickles my ADHD brain just right.

However, with the recent blackout of a big part of Reddit, I decided to start my own Lemmy instance and make it my primary destination instead. Since I started this experiment, I feel the urge to visit Reddit for my “fix” less and less. I have some thoughts about that.

In Twitter vs. Mastodon, AKA “micro-blogging”, the value was in the specific people one followed, which made it much harder to switch services because one was held back by other people. That is, the people kept each other locked in, similar to how Facebook keeps everyone locked into its walled garden because it's the only social media their parents and grandparents managed to learn to use.

In Reddit however, the value is all about the specific forums, or “subreddits” in the lingo. The specific people one was talking to never really mattered. What was important was the overall engagement and the general sense of shared interest. This has always been the core strength of Reddit, and its early pioneers like Aaron Swartz understood that.

This is why the minimalist Reddit of old managed to dethrone Digg when the latter decided that its core principle wasn't user-curated content, but linkspam. The people who migrated to Reddit made it what it is today, by creating and nurturing their communities over years.

Any beneficial action by Reddit itself has either followed what the community was already doing (such as adding CSS options or on-boarding the AutoModerator bot), or was forced by bad optics, such as when they finally had to ban /r/coontown, /r/fatpeoplehate, /r/jailbait (which their current CEO moderated, by the way), etc.

The community and the people who run the subreddits have always had to make the minimalist options allowed to them work. They had to develop their own tools and enhancements, such as RES and Moderator Toolbox, while Reddit couldn't even provide much-requested functionality to counter known abuses like cross-subreddit raiding.

Instead, Reddit focused on adding useless features nobody asked for, like NFTs. On the usability front, the new look was their push to take the site towards a generic social media network, with friends, follows, awards and avatars, instead of focusing on their core product: link aggregation and discussions.

In fact, every action they took was laser-focused on social-media lock-in, extracting wealth, and adding features people didn't care for, which is why most third-party apps simply ignored all that stuff.

Through all this, their valuable communities kept fighting against Reddit management's pushes so that they could do what was right, even if some lost that fight, like /r/AMA, which became but a shadow of its former self when the cowardly owners fired the low-level employee leading its success, and scapegoated their then female CEO for it.

Eventually, though, something had to give, and Reddit seems to have realized that their users are too stubborn to simply accept the new paradigm designed for them, where they watch more ads, buy more Reddit gold and get addicted to NFTs. And third-party apps enabled users to use the valuable part of Reddit and skip the enshittification all too easily.

So they had to go. And here we are.

Unfortunately for reddit, since the core value of reddit has always been the links, and the discussions around said links, instead of specific people and a social network around them, it is stunningly easy to jump ship. It doesn’t take a lot to keep a community going on Lemmy instead of Reddit. All it needs is a handful of dedicated people to keep finding and posting links, and the discussions and memes will easily follow.

I don't need to know the links are coming from Gallowboob; in fact, I never cared who posted the links or started the discussions. Reddit has had the “friends” feature for close to a decade now, and I have “friended” less than a handful of people. There's literally nothing holding me and most people back except our existing routines.

There is of course still a lot of momentum in Reddit communities, and a lot of mods who really don't want to lose their status. Nevertheless, I find I'm not actually missing much by staying exclusively on Lemmy at the moment, and I see a lot of people realizing the same thing increasingly fast. The loss of major apps like Apollo, RIF and Sync has already been the final nail in the coffin for a lot of people.

This exodus might already be unstoppable unless Reddit completely capitulates and goes back on their API plans. But I'm not holding my breath.

Feel free to come and hang out at the Divisions by zero lemmy instance btw. We’ll do fun things!

Client and Bot Updates

Lucid Creations

Lucid Creations has gotten a few more updates, the most recent being the ability to load your generation settings from a previous generation you have saved, plus a few other quality-of-life fixes.

More Styles

The AI Horde stylelist has received LoRa support, and about 40 new styles have been added using the awesome work by konyconi. Likewise, a new category called “stylepunks” has been added, which will generate using random styles from those new ones.
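To illustrate how such a category can work, here is a hypothetical sketch: the style and category names below are made up, and the real definitions live in the AI Horde styles repository.

```python
import random

# Hypothetical style/category data; real names live in the AI Horde styles repo.
STYLE_CATEGORIES = {
    "stylepunks": ["konyconi-biopunk", "konyconi-frostpunk", "konyconi-dieselpunk"],
}

def resolve_style(name: str, rng: random.Random) -> str:
    """Return a concrete style name, picking one at random if given a category."""
    if name in STYLE_CATEGORIES:
        return rng.choice(STYLE_CATEGORIES[name])
    return name

rng = random.Random(42)
print(resolve_style("stylepunks", rng) in STYLE_CATEGORIES["stylepunks"])  # True
print(resolve_style("konyconi-biopunk", rng))  # konyconi-biopunk
```

A plain style name passes through unchanged; only category names trigger the random pick.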

Mastodon & Reddit Bots

The Mastodon and Reddit bots have been updated to support LoRa styles.

Discord Bot

The official AI Horde Discord bot has also been updated by ZeldaFan to support the new LoRa styles, but there's no category support yet. Hopefully soon.

Other Clients

A ton of other clients are receiving a flood of updates. In particular, ArtBot and aislingeach are both seeing very rapid development. Remember to check their individual Discord channels if you haven't already.

Hordelib repo is temporarily down due to DMCA

As I mentioned last time, we received a DMCA take-down for a couple of missing attributions on AGPL3 files. We have since added those back, but as is the case with DMCAs, once you start them, they don’t stop on their own.

We had already sent a counter-notice, but unfortunately not fast enough. The hordelib repo has now been hidden from the world. You can read the whole bogus DMCA request here.

I remind you that hlky made no attempt to request those attributions informally, but went straight to a DMCA, which tracks, since this person is always acting in bad faith. It's just that in this case it's a convenient way to attack the AI Horde project once more.

This is a minor inconvenience, as hordelib is still available on PyPI, but annoying nonetheless.

Hopefully we’ll be back soon enough, and until then we’ll just re-host elsewhere for a while.

The AI Horde now seamlessly provides all CivitAI LoRas

Demand for the AI Horde to support LoRas has been high ever since they were discovered and unleashed onto the Stable Diffusion ecosystem. About 2 months ago we were really close to releasing support for them, but alas, “stuff happened”, which culminated in us having to completely rewrite our inference backend.

Fortunately, ComfyUI already supported LoRas, so once we switched to using it as our inference library, we just needed to hook into its LoRa capabilities. The extremely skilled hordelib devs, Jug and Tazlin, and I have been working constantly on this for the past half-month or so, and today I'm proud to announce that the LoRa capability has been released to all AI Horde Workers!

You might have seen earlier that some clients, like ArtBot, the Krita plugin and Lucid Creations, already supported LoRas. This was, however, only via the few beta-testing workers we had. Now those clients will start seeing a lot more speed as workers with the power to do so start running LoRas.

This also signifies our tightest integration with CivitAI yet! We were already utilizing them, but mostly just to pull Stable Diffusion checkpoint files. The LoRa integration, however, goes far further than that, using them as an integral part of our discovery and control process.

Through the very useful REST API provided by CivitAI, we have developed a method to discover, download and utilize any LoRa that is hosted there. No more do you need to search, validate compatibility, download, etc. The AI Horde and its Workers will handle that automatically!
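As a rough illustration of what that discovery flow looks like against CivitAI's public REST endpoint, here is a hedged Python sketch. The query URL matches CivitAI's documented models endpoint, but the abridged response shape and the helper names are my own illustration, not AI Horde code:

```python
from urllib.parse import urlencode

CIVITAI_API = "https://civitai.com/api/v1/models"

def lora_search_url(partial_name: str, limit: int = 5) -> str:
    """Build a CivitAI REST query for LoRas matching a partial name."""
    params = {"query": partial_name, "types": "LORA", "limit": limit}
    return f"{CIVITAI_API}?{urlencode(params)}"

def pick_download(model: dict) -> dict:
    """Pick the first safetensors file of the latest model version."""
    version = model["modelVersions"][0]
    files = [f for f in version["files"] if f["name"].endswith(".safetensors")]
    return {"name": model["name"],
            "trigger_words": version.get("trainedWords", []),
            "url": files[0]["downloadUrl"]}

# Abridged example of the response shape returned by the API
sample = {"name": "ExampleStyle",
          "modelVersions": [{"trainedWords": ["examplestyle"],
                             "files": [{"name": "examplestyle.safetensors",
                                        "downloadUrl": "https://civitai.com/api/download/models/1234"}]}]}

print(lora_search_url("examplestyle"))
print(pick_download(sample)["url"])
```

In practice a worker would fetch the search URL, pick a matching model, then download and cache the safetensors file before applying it to the pipeline.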

To showcase this point, I've made a small video showing off this capability in Lucid Creations, which I've tightly integrated with CivitAI, so that it can look up any LoRa by ID or partial name, display all relevant information, and even warn you of compatibility issues.

Sorry about the sound quality. I don’t have a professional streamer setup, just a shitty webcam 😀

I've put a lot of effort into making finding and using LoRas as painless as possible, because there are literally tens of thousands of the things. Manually searching through the website and copy-pasting IDs is a complete PITA, never mind downloading and trying them out one by one.

Through the AI Horde, you can simply type part of the name of what you want, and keep mixing and matching until you achieve the desired outcome! Changing and trying new LoRas now takes literally a few seconds!

Also, many kudos to the CivitAI devs, who quickly implemented a request I put through to make this even easier and faster for everyone.

As this has just been released, only a few UIs currently support it: Lucid Creations, which has full discovery; ArtBot, which requires you to know the ID; and the Krita plugin. I expect the other UIs to follow in the coming days.

Hopefully this is another step in unleashing your creativity, without requiring a powerful GPU of your own!

Stay tuned for the next update adding access to all Textual Inversions as well.

Lucid Creations is the first client to support LoRa on the AI Horde!

I don't have time to keep Lucid Creations well updated, but sometimes I just need to show others how it's done!

So I made Lucid Creations the first UI to support LoRa on the AI Horde!

In case you don't know what LoRas are, the short version is that they are nothing less than a breakthrough in generative AI technology, drastically condensing the time and power that training (or “fine-tuning”) a model needs!

This kind of breakthrough, along with the achievements of new models such as Stable Diffusion and Llama, is what is causing the big tech companies to scramble, to the point that OpenAI ran to the government to ask them to regulate future breakthroughs (but please, please don't regulate what they're currently doing, OK?).

But I digress. I haven't officially announced LoRa support on the AI Horde yet, as we're still trying to squash all the bugs on the workers, but I hope the addition to Lucid Creations will help other integrators see how it can be done.

AI Horde’s AGPL3 hordelib receives DMCA take-down from hlky

I have tried to avoid writing about hlky drama for the sake of the AI Horde ecosystem. I don't want to delve into negative situations, and I was hoping that by ignoring this person, our community could focus on constructive matters: improving the open-source generative AI tools.

However recent developments have forced my hand, and I feel I need to write and inform the larger community about this. I will attempt to stick to the facts.

The AI Horde Worker includes a customized library, hordelib. This library is completely based on ComfyUI.

Yesterday we were forwarded two DMCA take-down requests from GitHub, originating from hlky, requesting to take down hordelib because of claims against a couple of files I ported from nataili, the previous library I was co-authoring with hlky.

Nataili was developed as AGPL3 from the start. This is the main reason I chose it as the backend to the AI Horde Worker instead of using a bigger player like Automatic1111 WebUI (which, back then, did not have a license.)

Unfortunately, a big reason we abandoned nataili is that hlky attempted to sabotage the AI Horde ecosystem and demanded that we stop using the nataili free software library, going against everything the Open Source movement stands for. There is more drama behind the scenes, but as I said, I want to stick to the public facts in this post.

Nevertheless, eventually we couldn't maintain nataili, so we decided to create hordelib instead, which would also insulate us from hlky. However, some critical components we needed for our image alchemy and anti-CSAM capabilities were not available natively in ComfyUI, so I ported over the necessary files from nataili. Remember, these files are licensed under the AGPL3, so this is completely and irrevocably allowed.

In the process, I stripped out the explicit license mention in those files, because our whole repository is licensed under the AGPL3, and it goes against our style to add redundant licenses to each file. As far as I understood, this was allowed by the license terms.

The DMCA take-down claims that removing those copyright and license strings from those files is a sufficient reason to request the whole repository to be taken down!

I have since attempted to get some clarity on this issue on my own. The only relevant part of the license I can find is this:

Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

[…]
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or

AGPL3 License

And, I mean, fair enough, this seems clear enough, but I need to point out that the original license notices put in those files by hlky did not require preservation of author attributions!

Nevertheless, in the interest of expediency and in the spirit of open source, I have since re-added the attributions to those files.

Unfortunately, once you send official DMCA notices, things start becoming serious, and you never know which way the dice will roll. I feel we have a pretty clear-cut case that we did nothing wrong here, and certainly nothing that would require a whole FOSS library to be taken down!

I have sent a counterclaim to GitHub in an attempt to ensure they don’t take any take-down steps.

However, given the numerous bad faith acts by hlky to this day, the most prudent option would be to excise these files completely. I would rather not have any mention or contribution of this person in our library, as they go against everything the Free Software movement stands for!

If you have the skills to contribute alternative code for the CLIP and BLIP interrogation modules, please contact me ASAP!

Likewise if you have any advice you can give on this issue I’d appreciate it.

The AI Horde Aesthetic and Artifacts Rating now on HuggingFace

While the gathering of aesthetic ratings never slowed down, the resulting dataset was only ever available from a specific URL in the ratings API, making it fairly difficult to discover. To be honest, I was expecting that by now at least some models and other applications would have come out.

To help researchers and other interested parties discover it more easily, I decided to upload the dataset file to Hugging Face. You can find it here: https://huggingface.co/datasets/Haidra-Org/AI-Horde-Ratings

The benefit of this location is that I can now also provide a README file along with it, explaining the columns in the parquet file and providing other useful info, such as a Hugging Face demo.

Epic AI Horde Documentation Update

Today I started by wanting to do some work on the upcoming LoRa functionality for the AI Horde; however, I first wanted to update the AI Horde README a bit, as it still referred to things like the KoboldAI Horde (which was merged into the AI Horde a while ago).

As I was finishing, I noticed that the written FAQ had also grown a bit stale, and there are a few new points I feel I need to make absolutely clear, such as kudos sales being forbidden, or the kudos calculation being handled by a new NN model, etc.

It quickly dawned on me, however, that there's a ton of Horde terminology I'm using in the FAQ which someone might not understand. We now have Dreamers, Scribes and Alchemists, Workers and Bridges, Integrations, Frontends, Plugins and Bots, etc. Quite a few things for someone unfamiliar.

So I decided to try writing some cross-referenced terminology in the context of the AI Horde, but the more I kept writing, the more terms I realized I needed to define as well. And once I started pulling on this thread… well, let's just say I spent a bit more time than I planned on documentation today, and now the Terminology wiki page is available, extensively cross-referenced within itself and to Wikipedia as needed.

I honestly had to forcefully stop myself from expanding it, because I kept thinking of more things to define. So if you're the kind of person who likes doing this sort of thing, feel free to improve it further!

State of the AI Horde – May 2023

I’ve been meaning to write one of these posts every month, but the events since I wrote the last piece have been fairly disruptive. With the loss of our maintainers of nataili, we’ve been forced to put our heads down to alleviate the backend issue, and I didn’t want to write another “State of the AI Horde” until that business was completed. More about that later. First, let’s look at the basics.

(In case you're new here and do not know what the AI Horde is: it is a crowdsourced free/libre software open API which connects generative AI clients to volunteer workers who provide the inference.)

More Requests, More Power!

The total number of images generated has stayed relatively stable since last month, at ~12M images. However, the total amount of terapixelsteps is up to 4.2 TPS, compared to 3.7 TPS last month. This shows that people are looking for more detail at higher resolutions instead of just grabbing more images. Unfortunately we can't capture the impact of ControlNet as easily, but suffice it to say, its demand is pretty significant.

On the LLM front, we’ve generated 3.5M texts, for a total of 374 Megatokens. We have effectively tripled the LLM output! This makes sense as we see plenty of traction in the LLM communities since Google Colab started banning Pygmalion.

Also, did you know Cubox from our Discord community has gone and set up a whole public Grafana server to track our stats? Head over and check it out yourself.

Top 10 Stable Diffusion models

No significant changes since last month in this chart, with Deliberate still proving its worth and further solidifying its position with a 25% use rate throughout the entire month. Stable Diffusion 1.5 sees plenty of use too.

Anime models seem to be losing some popularity across the board, and Dreamshaper kicked Abyss OrangeMix off the board, settling at a decent 5%. Good showing!

  1. Deliberate 25.8% (3067246)
  2. stable_diffusion 14.4% (1707729)
  3. Anything Diffusion 9.0% (1067386)
  4. Dreamshaper 5.0% (596651)
  5. Realistic Vision 4.7% (563079)
  6. URPM 3.6% (428415)
  7. Hentai Diffusion 2.9% (348985)
  8. ChilloutMix 2.6% (314301)
  9. Project Unreal Engine 5 2.6% (307907)
  10. Counterfeit 2.2% (260739)

Top 10 Text models

As is to be expected, Pygmalion 6b is still leading the charts by a significant margin, but its big brother, the recently released Pygmalion 7b, which is based on the Llama models, has already secured 3rd place and is only set to further cannibalize 6b's position.

Erebus and Nerybus continue to fill up the rest of the board with a few new 4bit models starting to finally come into the fray.

  1. PygmalionAI/pygmalion-6b 44.5% (1571109)
  2. KoboldAI/OPT-13B-Erebus 9.8% (345983)
  3. PygmalionAI/pygmalion-7b 5.2% (183413)
  4. KoboldAI/OPT-2.7B-Nerybus-Mix 3.8% (135223)
  5. KoboldAI/OPT-13B-Nerybus-Mix 3.8% (132564)
  6. bittensor/subnet1 2.5% (86859)
  7. Pygmalion-7b-4bit-32g-GPTQ-Safetensors 2.3% (79577)
  8. gpt4-x-alpaca-13b-native-4bit-128g 2.0% (72358)
  9. pygmalion-6b-gptq-4bit 1.8% (64750)
  10. OPT-6.7B-Nerybus-Mix 1.8% (62300)

Image Ratings keep flowing in

New ratings continue unabated, but we're still finding people trying to bypass our countermeasures and automatically rate images to poison the dataset. Please don't do that. If you want kudos, you can easily get more by simply asking in our Discord, instead of wasting our time 🙁

One interesting note is that Stable UI has finally surpassed ArtBot in number of images rated:

 count  |        client        
--------+----------------------
   6270 | AAAI UI
 110826 | ArtBot
 279112 | Stable UI
   3197 | Unknown
   2700 | ZeldaFan Discord Bot

A total of 400K ratings over the past month. Very impressive.

I have also finally on-boarded the full DiffusionDB dataset of 12M images into the ratings DB, so we'll have enough images to rate for the foreseeable future.

A whole new backend

As I mentioned at the start, the big reason this state of the AI horde was delayed, was the need to switch to a completely new backend. It’s a big story, so if you want to read up on some details on that, do check out the devlog about it.

New Features and tweaks

Not much to announce, since all our effort went into the backend switch, but a few things nevertheless:

  • We have added support for Shared Keys, so that you can share your account priority, without risk.
  • A new kudos calculation model has been put live, replacing the manual way I was calculating kudos for each request based on magic numbers. The new kudos model is a neural network trained by Jug, which takes into account the whole payload and empirically figures out the time it would take to generate.
  • Kudos consumption has been removed for weights, and a tiny kudos tax of 1 kudos has been added per request to discourage splitting the same prompt into multiple separate requests.
  • You can now change your worker whitelist into a worker blacklist so that you avoid specific workers, instead of requesting explicitly which ones you want.
  • A new “Customizer” role has been added to allow people to host custom Stable Diffusion models. This is not possible on the Worker yet, but once it is, these people will be able to do it. To get this role, ping the AI Horde moderators.
  • We had to fight back another raid, on the LLM side this time, which forced me to implement some more countermeasures: Scribes (i.e. LLM workers) cannot connect through a VPN unless trusted, untrusted users can only on-board 3 workers, and there is now a worker limit per IP.
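The whitelist-to-blacklist toggle described above can be sketched roughly like this; the function and parameter names are illustrative, not the actual AI Horde API fields:

```python
def worker_allowed(worker_id: str, workers: list[str], worker_blacklist: bool = False) -> bool:
    """Sketch of the described behaviour: the same 'workers' list acts as a
    whitelist by default, or as a blacklist when the flag is set.
    (Illustrative names; not the real AI Horde payload schema.)"""
    if not workers:
        return True  # no list given: any worker may pick up the job
    if worker_blacklist:
        return worker_id not in workers  # listed workers are avoided
    return worker_id in workers          # only listed workers are allowed

print(worker_allowed("w1", ["w1", "w2"]))                         # True
print(worker_allowed("w1", ["w1", "w2"], worker_blacklist=True))  # False
print(worker_allowed("w3", []))                                   # True
```

The appeal of a single flag is that clients don't need a second list field: the semantics of the existing list simply flip.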

Discord Updates

I've been doing some work to improve the social aspects of our Discord server. One part is on-boarding 3 new volunteer community managers, in the hope that we can start doing more events and interactions with the community.

Another is the addition of a new channel for each Integration I know of. I give the developers/maintainers of these services admin access to that channel so they can use it to provide support to their users, or redirect users to their own Discord servers if they prefer.

If you have a service integrating into the AI Horde API, please do let me know and I’ll be happy to open a new channel for you and give you admin access to it.

Infrastructure Improvements

Our demand is increasing significantly, as we have more and more concurrent web sessions every day. Unfortunately, a few database downtimes this month convinced me that hosting it on a Contabo VM with 30% CPU steal is not feasible anymore.

So I finally finalized switching the database to a dedicated server. It's a lot more expensive, but massively worth it, as our response times have improved five-fold! A worker payload pop that used to take 1-3 seconds can now take 0.2-1 second. As a result, the whole AI Horde should feel way snappier now.

I have also added our first dedicated API front-end box. Initially it didn't look to be doing too well, having some of the worst performance, but once I switched to the dedicated database server, it suddenly became the fastest by far, making very obvious just how much impact latency has.

I have also finally deployed a central log server based on Graylog, which should help me track issues across all servers and look them up historically as well.

Funding and Support

All of the above means the AI Horde is now significantly more expensive to run, and it's almost a second full-time job for me at this point. Currently all of this is funded via my Patreon subscribers only, but that is not scaling quite as fast as my infrastructure costs :-/

To add to this, the stability.ai support seems to have run dry; the GPUs they were providing to the AI Horde have been removed, and I haven't been able to arrange to bring them back up.

So I think I need to more consistently promote the way to help me sustain the AI Horde as a truly free and open API for everyone.

So please, if you see the potential of this service, consider subscribing as my patron, where I reward people with monthly priority for their support.

If you prefer, you can instead sponsor me on github.

I am also looking for business sponsors who either see the value of a Crowdsourced Generative AI Inference REST API or might want to use it for themselves. I basically have no contacts in the AI world so I would appreciate people forwarding this info to whoever might be interested.

Final Word

The last month has been very difficult for the AI Horde, but fortunately I've had the help of some really valuable contributors from both the AI Horde community and the KoboldAI community, and we managed to pull through.

Now, with our new backend, I expect new features to come much faster and with higher quality. We're already working hard on LoRa support horde-wide, for one, so stay tuned!

The AI Horde Worker Moves to a Completely New Inference Backend

Close to a month and a half ago, our last remaining maintainer of the nataili library dropped out, and we were left functional but “rudderless” as far as inference goes. We could continue operations, but we couldn't on-board new features anymore, as neither I nor any of the remaining regulars have ML knowledge.

In desperation, I asked one of our regulars, Jug, who had been helping out with some Python work on the worker, whether he thought it would be possible to switch to ComfyUI as a backend, as it had some good ideas and was modular enough to be of use to us.

To my surprise, Jug not only thought it was a good idea, but jumped in with both feet and started hacking around to make it work. Not only that, but we managed to pull in another regular developer, Tazlin, who also started helping us with design best practices. As a result, the new library we started developing was built from the ground up with extensive test coverage, which will make it much easier to discover regression bugs.

The first step was to reach feature parity, which required not only wrangling the ComfyUI pipelines so they could be called from hordelib, but also porting features we were using in the AI Horde Worker, such as CLIP, over to ComfyUI.

This early phase was where I could still provide some help, as I'm pretty good at porting features, writing tests for them, and integrating stuff into the AI Horde Worker, but still, the lion's share of the work on hordelib was done by Jug, with Tazlin making the code much more reliable and maintainable.

A couple of weeks in, we had almost all the features we needed, but this is where the tricky business started. The first thing we noticed was that ComfyUI was not handling multi-threading well, which makes sense, as it's meant to be used by a single user on a single PC. That added massive amounts of instability, because our AI Horde Worker uses threads for everything, to nullify latency delays.

So the next phase, for about two more weeks, was stabilizing the thing, which required a much deeper dig into the ComfyUI internals to wrangle individual processes into a multi-threaded paradigm.
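The general shape of that wrangling, serializing access to a backend that assumes a single caller, can be sketched as follows. This is a toy illustration of the locking pattern, not hordelib's actual code:

```python
import threading

class SerializedBackend:
    """Wrap a single-threaded inference callable so multiple worker
    threads can share it safely (illustrative sketch, not hordelib's API)."""
    def __init__(self, infer):
        self._infer = infer
        self._lock = threading.Lock()

    def __call__(self, payload):
        # Only one thread may touch the backend at a time; the others
        # block here instead of corrupting the backend's internal state.
        with self._lock:
            return self._infer(payload)

# Toy "backend" that is not thread-safe on its own
state = {"runs": 0}
def fake_infer(payload):
    state["runs"] += 1
    return f"image-for-{payload}"

backend = SerializedBackend(fake_infer)
threads = [threading.Thread(target=backend, args=(i,)) for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(state["runs"])  # 8
```

The trade-off is exactly the one described next: every lock that buys stability also serializes work that used to overlap, which is where the slowdown came from.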

Finally that was done, about a month after I'd inquired about moving to Comfy. Then we discovered the next problem: due to all the mutex locks preventing multi-threaded instability, the whole thing was now much slower than nataili. Like, significantly so!

So another two weeks were spent figuring out where the slowdowns occurred in our implementation and tweaking things to work more optimally, and even trying to figure out whether there was indeed a slowdown in the first place, as comparisons with nataili were difficult to make.

We even built a whole benchmark suite to measure overall inference speeds, without getting confused by HTTP and model-loading latency.

But beta testers were still reporting a seemingly lower kudos reward, so we then suspected the old way of calculating kudos was not applying well to hordelib's inference, due to it working differently. For example, it has no slowdown for weights, but ControlNet types gave different speeds than we expected, even different speeds per control type.

To track this down, Jug trained a new neural network to figure out how much time a generation is expected to take, rather than trying to time each individual feature. The new model was so successful, at 96% accuracy, that we decided to on-board it onto the AI Horde itself, as a way to calculate kudos more accurately.
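As a purely illustrative sketch of the idea, here is a toy regressor over payload features. The production model is a neural network trained by Jug on measured generation times; the feature set, weights and bias below are invented for illustration:

```python
# Illustrative only: the real kudos model is a trained neural network;
# these features and weights are made up to show the shape of the idea.
def payload_features(payload: dict) -> list[float]:
    """Turn a generation payload into numeric features."""
    return [
        (payload["width"] * payload["height"]) / (512 * 512),  # resolution ratio
        payload["steps"] / 30.0,                               # step-count ratio
        1.0 if payload.get("control_type") else 0.0,           # ControlNet on/off
        float(len(payload.get("loras", []))),                  # number of LoRas
    ]

def predicted_seconds(payload: dict, weights=(2.0, 1.5, 3.0, 0.5), bias=0.5) -> float:
    """Linear stand-in for the NN: expected wall-clock time for the job."""
    return bias + sum(w * f for w, f in zip(weights, payload_features(payload)))

job = {"width": 512, "height": 512, "steps": 30, "loras": ["some-lora"]}
print(round(predicted_seconds(job), 2))  # 4.5
```

The point of predicting whole-payload time rather than per-feature timings is that feature interactions (like per-control-type ControlNet speeds) are learned from data instead of hand-tuned magic numbers.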

This investigation did point us to some things that worked unexpectedly within ComfyUI. For example, prompts longer than 77 tokens tended to be quite a bit slower, which, after speaking with the ComfyUI devs, turned out to be a quality trade-off. We did discover a workaround for the AI Horde, but it's this sort of thing that introduces unexpected slowdowns compared to before. We're going to continue looking for and tweaking things as we discover them.

The good news is that the overall quality of images using the ComfyUI branch has increased across the board. Not only that, but weights no longer add extra slowdown (so the extra kudos cost has been removed), and they can also exceed 1.3 without distorting the image, which is how most other UIs use them anyway.

The big change is that images with the same payload and the same seed will look different in ComfyUI compared to nataili. This is simply due to the way its inference works, and something we'll have to live with.

1.0.0

So now we have the three pillars built: parity, stability and speed; it's time to go live!

hordelib has been bumped to 1.0.0 and the AI Horde Worker to 21.0.0. When you run update-runtime next time, you'll automatically be switched to the new inference backend, but you may need to update your bridgeData.yaml file ahead of time.

In short:

  1. Set the vram_to_leave_free and ram_to_leave_free values to what works for you.
  2. Rename nataili_cache_home to cache_home.
  3. Delete any unused keys (like disable_voodoo).
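For illustration, a minimal bridgeData.yaml fragment reflecting those steps might look like this; the values are placeholders, so tune them for your own hardware:

```yaml
# bridgeData.yaml (illustrative values only)
vram_to_leave_free: "80%"   # new in Worker 21.0.0
ram_to_leave_free: "80%"    # new in Worker 21.0.0
cache_home: "./models"      # was: nataili_cache_home
# disable_voodoo: true      # obsolete keys like this can simply be deleted
```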

Also, as a user of the AI Horde, keep in mind that the new Workers do not yet support tiling or pix2pix.

But the new inference backend is not only available for the AI Horde; it's available for everyone else too. Due to the generic way we've built it, any Python project which needs access to image generation can now import hordelib from PyPI and get access to all the multi-threaded text2img and img2img functionality we provide!

What’s next

With the move to hordelib, we are now effectively outsourcing our inference development upstream, which allows us to use new Stable Diffusion developments as they get on-boarded into ComfyUI. Hopefully development of ComfyUI will continue for the foreseeable future, as I am really not looking forward to changing libraries again any time soon >_<

This also means that we finally have the capability to on-board LoRas and textual inversions, which have been requested for a long time but which our backend never supported. Likewise with new Stable Diffusion models and all the exciting new developments happening practically weekly.

It’s been a lot of hard work, but we’re coming out of it stronger than ever, thanks to the invaluable help of Jug, Tazlin and the rest of the AI Horde community!