State of the AI Horde – 26/03/2023

Things are progressing very rapidly in this dawn of AI, and likewise for the AI Horde. I thought it would be a good idea to post about all the things that have changed and improved for our service in recent days.

More Requests. More statistics.

I’ve deployed endpoints to measure the usage of the AI horde. Now that one month has passed, we can take a look.

  • Per day, we are averaging 356,378 images (3.7 terapixelsteps) and 45,248 texts (4 megatokens)
  • In the past month, we produced 11,475,183 images, generating a staggering 127.6 terapixelsteps. Text has also picked up significant speed since merging the hordes with 1,241,895 generated texts for a total of 112.8 megatokens!

Top 10 Stable Diffusion models

The AI Horde offers close to 200 models at the same time. Our statistics allow us to see how the popularity of the various models changes day to day and month to month. Below are just the top 10 models by usage.

  • Deliberate 22.2% (2550591)
  • stable_diffusion 15.1% (1730426)
  • Anything Diffusion 11.0% (1257688)
  • Hentai Diffusion 4.1% (468473)
  • Realistic Vision 3.0% (338742)
  • Counterfeit 2.7% (310337)
  • URPM 2.6% (297853)
  • Project Unreal Engine 5 2.5% (289006)
  • waifu_diffusion 1.8% (211572)
  • Abyss OrangeMix 1.8% (205268)

For the longest time SD 1.5 (stable_diffusion above) was king, but in the past month Deliberate has confidently taken the lead, with a staggering 22% of all image requests passing through the AI Horde! This speaks very highly of the popularity of the model.

Top 10 Text models

Almost as many text models exist for the AI Horde, but they’re more varied. The last month also saw two big milestones: the release of the Pygmalion models for chat-like generation, which happened after the gimping of the Character AI models, and the release of the new LLaMA model, bringing unparalleled miniaturization of model size and allowing far more coherence on consumer GPUs.

  1. PygmalionAI/pygmalion-6b 52.4% (651566)
  2. KoboldAI/OPT-13B-Erebus 14.0% (174393)
  3. KoboldAI/OPT-6.7B-Erebus 6.7% (83249)
  4. KoboldAI/OPT-6.7B-Nerybus-Mix 3.8% (46747)
  5. KoboldAI/OPT-13B-Nerybus-Mix 2.8% (35110)
  6. KoboldAI/OPT-13B-Nerys-v2 2.7% (33667)
  7. Facebook/LLaMA-13b 1.9% (23367)
  8. KoboldAI/OPT-6B-nerys-v2 1.9% (23232)
  9. OPT-6.7B-Nerybus-Mix 1.6% (19268)
  10. KoboldAI/OPT-2.7B-Erebus 1.0% (12464)

We can see Pygmalion has immediately dominated text generation, with Mr.Seeker’s storytelling models mopping up the rest, but the Llama Ascendancy is just beginning!

Ratings, botting and counter-measures

A few months ago we started collecting ratings for the LAION non-profit to help improve the models existing in the commons, as the success of Midjourney has a lot to do with them training their models on the best images their previous generations created.

The initial design was very simple, to allow integrators to onboard it fast, and gave good kudos rewards to those helping us. Unfortunately, people almost immediately started abusing this by creating bots to rate randomly, thereby poisoning our collection’s accuracy.

I always knew this was a possibility, but I was hoping I wouldn’t be forced to add countermeasures quite so soon. So I spent quite a few days adding a captcha mechanism (among other things) to block at least the low-hanging fruit.

It immediately led to a drop in ratings per day, which shows just how much damage botted ratings were doing.

New Features

We are fortunate enough to have gathered some great collaborators for the inference aspect of the AI Horde, so I wanted to give them a big shout-out.

  • ResidentChief has stepped up strongly to help add new features and squash bugs in the nataili library. As a result, the AI Horde now supports inpainting on many more models, a lot more post-processors (such as more upscalers and background removers), controlnet improvements, and so much other stuff too numerous to mention. They’re a beast!
  • Jug has been working on improving the AI Horde Worker practically non-stop, giving us great terminal control and improving the WebUI, plus a lot of bugfixes and improvements on the bridge side of things.
  • Tazlin has been doing a great deal of tech support in the channels, helping me detect and figure out malicious ratings, and sending some code improvements as well!
  • Aes Sedai has been putting a ton of work into improving the moderation capabilities of the AI Horde with a custom frontend.

And of course all the frontend integrators, like rockbandit, aqualxx, sgt.chaos and concedo, who’ve been keeping the frontends up to date with a lot of features, smartly using the capabilities of the AI Horde in ways even I had not expected!

CI/CD and pypi

I finally got around to adding CI/CD pipelines for AI Horde Worker and nataili. Now they will be automatically versioned when the right tag is applied to a PR. The Nataili package has also been republished to pypi and will also automatically receive new versions whenever we publish a new release on GitHub.

The pipelines also automatically publish a notification on Discord, so people can be aware when something new is up.

Alchemists

Using the new post-processing improvements from ResidentChief, I’ve expanded the interrogation worker so that it can now perform post-processing on images, as well as img2text operations. The previous name didn’t fit so well anymore, so I’ve renamed it to “Alchemist”, to signify its capability to convert images into something else.

Likewise, the official name for the image worker is now “Dreamer” and for the text worker “Scribe”. Why not 🙂

Final Word

The pace of progress in this space is mind-blowing. I can’t wait to see what we achieve together in the coming days!

Fantasy.ai is how the enshittification of Stable Diffusion begins

Fantasy.ai has gotten into hot water since its inception, which, for a company based on the Open Source community, is quite an impressive feat on its own.

For those who don’t know: fantasy.ai goes to various popular model creators and tempts them with promises of monetary reward for their creative work, if only they agree to sign over some exclusive rights for commercial use of their model, as well as some other priority terms.

It’s a downright Faustian deal, and I would argue that this is how a technology that began with Open Source ideals, to be able to counteract the immense weight of players like OpenAI and Midjourney, begins to be enclosed.

Cory Doctorow penned an excellent new word for the process in which web2.0 companies die – Enshittification.

  • First they offer an amazing value for the user, which attracts a lot of them and makes the service more valuable to other businesses, like integrating services and advertising agencies.
  • Then they start making the service worse for their user-base, but more valuable for their business partners, such as via increasing the amount of adverts for the same price, selling user data and metrics, pushing paid content to more users who don’t want to see it, and so on.
  • Then, once their business partners are also sufficiently reliant on them for income, they tighten their grip and start extracting all the value for themselves and their shareholders, such as by requiring extravagant payment from businesses to let people see the posts they want to see, or the products they want to buy.
  • Finally, eventually, inexorably, the service experience has become so shitty, so miserable, that it breaches the Trust Thermocline and something disruptive (or sometimes, something simple) triggers a mass exodus of their user base.
  • Then the service dies, or becomes a zombie, filled with more and more desperate advertisers and an ever increasing flood of spam as the dying service keeps rewarding executives with MBAs rather than their IT personnel.

Because Stable Diffusion is open source, we are seeing an explosion of services based on it cropping up practically daily. A lot of those services are trying to discover how to stand out from the others, so we have a unique opportunity to see how enshittification can progress in the Open Source Generative AI ecosystem.

We have services at the first stage, like CivitAI, which offer an amazing service to their user-base by tying social media to Stable Diffusion models and fine-tunes and allowing easy sharing of your work. They have not yet figured out their business plan, which is why, until now, their service has appeared completely customer-focused.

We have services like Mage.space, which started completely free and uncensored for all, and as a result quickly gathered a dedicated following of users without access to GPUs, who used it for free AI generations. They are progressing to the second stage of enshittification: locking NSFW generations behind a paywall, serving adverts, and now also making themselves more valuable to model creators as soon as they smelled blood in the water.

We do not yet have Stable Diffusion services at the late stage of enshittification, as the environment is still way too fresh.

Fascinatingly, the main mistake of Fantasy.ai is not their speed-run through the enshittification process, but rather attempting to bypass the first step. Fantasy.ai entered the Generative AI game late, as its creator is an NFT-bro who wasn’t smart enough to pivot as early as the Mage.space NFT-bro. So, to make up the time, they are flexing their economic muscles, trying to make their service better for their business partners (including the model creators) and choking their business rivals in the process. A smart plan, if only they hadn’t skipped the first step: making themselves popular by attracting loyal users.

So now the same user-base which is loyal to other services has turned against fantasy.ai, and a massive flood of negative PR is being directed towards them at every opportunity. The absence of loyalty earned through amazing customer service is what allowed the community to see the enshittification signs more clearly and turn against fantasy.ai from the start. Maybe fantasy.ai has enough economic muscle to push through the tsunami of bad PR and manage to pull off step 2 before step 1, but I highly doubt it.

But it’s also interesting to see so many model creators being so easily sucked in without realizing what exactly they’re signing up for. The money upfront for an aspiring creator might be good (or not; $150 is way lower than I expected), but if fantasy.ai succeeds in dominating the market, that deal will eventually turn into a ball and chain, and the same creators who made fantasy.ai so valuable to the user-base will find themselves having to do things like bribe fantasy.ai to simply show their models to the same users who already declared they wish to see them.

It’s a trap, and it’s surprising and a bit disheartening to see so many creators sleepwalking into it, when we have ample history to show us this is exactly what will happen. As it has happened in every other instance in the history of the web!

AI-powered anti-CSAM filter for Stable Diffusion

One of the big problems we’ve been fighting against since I created the AI Horde has been attempts to use it to generate CSAM. While this technology is very new and there are a lot of questions to answer about whether it is even illegal to generate CSAM for personal use, I erred on the safe side and made it a rule from the start: the one thing that goes against the AI Horde, without exception, is such generated content.

It is not an overstatement to say I’ve spent weeks of work-hours on this problem. From adding capabilities for the workers to set their own comfort level through a blacklist, a censorlist and a bunch of other variables, to blocking VPN access, to the massive horde-regex filter that sits in front of every request and tries to ascertain from the prompt whether it intends to generate CSAM or not.

However, the biggest problem is not just pedos, it’s stupid but cunning pedos! Stupid because they keep trying to use a free service which is recording all their failed attempts, without a VPN. Cunning because they keep looking for ways to bypass our filters.

And that’s where the biggest problem lay until now. The regex filter is based on language, which is not only flexible around the same concept but, very frustratingly, the AI is capable of understanding multiple typos of various words, and other languages, perfectly well. This strains what I can achieve with regex to the breaking point, and led to a cat-and-mouse game where dedicated pedos kept trying to bypass the filter using typos and translations, and I kept expanding the regex.

But it was inherently a losing game which was wasting an incredible amount of my time, so I needed to find a more robust approach. My new solution was to onboard image interrogation capability into the worker code. The way I go about this is by using image2text, AKA image interrogation: an AI model which you feed an image and a number of words or sentences, and it tells you how well each of those words is represented in that image.

So what we’ve started doing is that every AI Horde Worker will now automatically scan every image they generate with CLIP and look for a number of words. Some of them look for underage context, while some look for lewd context. The trick is that detecting one context or the other is OK: you’re allowed to draw children, and you’re allowed to draw porn. It’s when these two combine that the filter goes into effect and censors the image!
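The two-context logic can be sketched like this. A minimal sketch, assuming hypothetical word lists and a 0.2 threshold; the real AI Horde values and word lists live in the worker code:

```python
# Sketch of the two-context check: either context alone passes,
# only the combination censors. Word lists and threshold are
# hypothetical, not the real AI Horde values.

UNDERAGE_WORDS = ["child", "infant", "toddler"]
LEWD_WORDS = ["nude", "explicit"]
THRESHOLD = 0.2

def should_censor(similarities):
    """similarities maps each interrogated word to its CLIP score (0..1)."""
    underage = any(similarities.get(w, 0.0) >= THRESHOLD for w in UNDERAGE_WORDS)
    lewd = any(similarities.get(w, 0.0) >= THRESHOLD for w in LEWD_WORDS)
    # Drawing children is fine; drawing porn is fine; the overlap is not.
    return underage and lewd
```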

But this is not even the whole plan. While the CLIP scanning on its own is fairly accurate, I further tweaked my approach by taking into account things like the values of the other interrogated words. For example, I noticed that when looking for “infant” in the generated image, pregnant women would also rate very high for it, causing the csam-filter to consistently censor naked pregnant women. My solution was to also interrogate for “pregnant” and, if its likelihood is very high, raise the threshold “infant” needs to hit.

The second trick was to also utilize the prompt. A lot of pedos were trying to bypass my filters (which were looking for things like “young”, “child” etc.) by not using those words, and instead specifying “old”, “mature” etc. in the negative prompt: effectively going the long way around to make Stable Diffusion draw children without explicitly telling it to. This was downright impossible to block using pure regex without causing a lot of false positives or an incredible amount of regex crafting.

So I implemented a little judo-trick instead. My new CSAM filter now also scans the prompt and negative prompt for some words using regex and, if they exist, slightly adjusts the interrogated words based on the author’s intent. So let’s say the author used “old” in the negative prompt: this will automatically cause the “child” weight to increase by 0.05. This may not sound like a lot, but most words tend to range from 0.13 to 0.22, so it actually has a significant chance of pushing a borderline word (which it would be in a successful CSAM attempt) over the top. This converts the true/false result of a regex query into a fine-grained approach, where each regex hit reduces the detection threshold only slightly, leaving non-CSAM images unaffected (since the weight of the interrogated word starts low) while making it more likely to catch the intended results.
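The judo-trick can be sketched as a small weight-nudging table. The patterns, the 0.05 bump, and the function name below are illustrative stand-ins, not the exact implementation:

```python
import re

# Sketch of the prompt "judo-trick": a regex hit in the negative prompt
# nudges the CLIP score of a related interrogated word upward before
# thresholding. Patterns and the 0.05 bump are illustrative values.

NEGATIVE_PROMPT_ADJUSTMENTS = [
    # (regex to find in the negative prompt, word to boost, bump)
    (re.compile(r"\b(old|mature|adult)\b", re.IGNORECASE), "child", 0.05),
]

def adjusted_score(word, clip_score, negative_prompt):
    """Return the CLIP score for `word`, nudged by any matching regex hits."""
    for pattern, target, bump in NEGATIVE_PROMPT_ADJUSTMENTS:
        if word == target and pattern.search(negative_prompt):
            clip_score += bump
    return clip_score
```

Since typical scores range from roughly 0.13 to 0.22, a 0.05 nudge only matters for words already scoring near the threshold, which is exactly the fine-grained behavior described above.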

Now, the above is not a perfect description of what I’m doing, in the aim of keeping things understandable for the layperson, but if you want to see the exact implementation you can always look at my code directly (and suggest improvements 😉 ).

In my tests, the new filter has fairly great accuracy with very few false positives, mostly around anime, which makes every woman look extraordinarily young as a matter of course. But in any case, with the amount of images the horde generates, I’ll have plenty of opportunity to continue tweaking, and maybe craft more specific filters for each type of model (realistic, anime, furry etc.).

Of course I can never expect this to be perfect, but that was never the idea. No such filter can ever catch everything; my hope is that this filter, along with my other countermeasures like the regex filter, will have enough of a detection rate to frustrate even the most dedicated pedos off the platform.

Short-circuiting my ASD

In my 40 years of life, I managed to pick up a lot of coping mechanisms to handle social situations. A lot of my reactions are copied instead of originating internally. It’s just something I do, because I know I’m expected to, and life is easier when I do so.

I think of it like this: I have catalogued all the emotional reactions I’m expected to have and put them in my mental database. I have also built an index for them, so whenever a social situation comes up, I look up the reactions I’m expected to have and use the most appropriate one. Eventually my own feelings also surface, and sometimes the situation is something that even people like me can empathize with or be affected by (usually, injustice).

But sometimes, I run into something I’ve never reacted to before, and my brain completely short-circuits and I just end up with no reaction at all. Such a situation happened just now.

I honestly have no idea what the appropriate reaction to this news is. Pity? Celebration? Comfort? I’ve got no fucking clue. I wanted to default to advice, but would that be insulting to them? In the end I was just honest about it (which by itself requires enough mental fortitude).

It’s these situations that often make it obvious (even to me) how many coping mechanisms I’ve had to create to handle the world smoothly. It’s therefore doubly funny when people who know me don’t even realize I am neurodivergent. Likewise, I see the stress and anguish of my 10yo child, who is high-functioning like me but hasn’t learned these coping mechanisms yet and is constantly tortured by his peers for not reacting to or handling situations “normally”.

Merging of the Hordes. The AI Horde is live!

A while back (gosh, it occurs to me this project is half a year old by now!) I took significant steps to join the two forks I had made of the AI Horde (one for Stable Diffusion and one for KoboldAI), as their diverging code was too difficult to maintain and keep at parity with the features and bug fixes I kept adding.

Then later on, I realized that my code just could not scale anymore, so I undertook a massive refactoring of the code-base to switch to an ORM approach. Due to the time criticality of that refactor (at the time, the stable horde was practically unusable due to the sheer load), I focused on getting the stable horde API up and running and disregarded the KoboldAI API, as that was running stably on a different machine and didn’t have nearly as much traffic to be affected.

Once that was deployed, a number of other fires had to constantly be put out and new features on-boarded, as Stable Diffusion is growing by leaps and bounds. That meant I never really had time to onboard KoboldAI to the ORM as well, especially since the code required refactoring to allow two types of workers to exist.

Later on, I added image interrogation capabilities as well, which incidentally required that I set up the horde to handle multiple types of workers. This led me to figuring out ORM class inheritance (which required figuring out polymorphic tables and other fun stuff), but it also meant that a big part of the groundwork was laid for adding the text workers (which is the kind of thing that does wonders to get my ADHD brain over its executive dysfunction).

Since then, it’s been constantly on the back of my mind that I needed to finally do the last part and merge the two hordes into a single code base. I had kept the KAI horde in a single lonely branch called KAI_DO_NOT_DELETE (because I deleted the other branch once during branch cleanup :D) with its single-core horde node running. But requests for improvements and bug fixes on the KAI horde kept coming, and the code bases had diverged so much by now that it was quite a mess to even remember how to update things properly.

The final straw was when I noticed the traffic to the KAI Horde had also increased significantly, probably due to the ease of using it through KoboldAI Lite. It was getting closer and closer to the point where the old code base would collapse under its own weight.

So it was time. I blocked my weekend off and started the 4th large refactoring of the AI Horde code base: the one which would allow the two horde types, mutually exclusive in the past, to run at the same time.

This one meant a whole new endpoint, new table polymorphism and going through all my database functions to ensure that all the data is fetched from all types of polymorphic classes.

I also wanted to make my endpoints flexible, so it occurred to me it would be better to have, say, api/v2/workers?type=text instead of maintaining api/v2/workers/image and api/v2/workers/text independently. This in turn ran into caching issues, as my cache did not take the query part into account when storing entries (and I am still not sure how to make it do so), so I had to turn to the redis cache.
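The idea boils down to one handler filtered by an optional query parameter, plus a cache key that includes the query string so the two variants don’t collide. The worker records and field names below are hypothetical, not the actual AI Horde schema:

```python
# Sketch: one /workers endpoint filtered by an optional ?type= query
# parameter, instead of separate /workers/image and /workers/text
# routes. Worker records and field names are hypothetical.

WORKERS = [
    {"name": "dreamer-1", "type": "image"},
    {"name": "scribe-1", "type": "text"},
    {"name": "dreamer-2", "type": "image"},
]

def list_workers(worker_type=None):
    """Return all workers, or only those matching ?type=<worker_type>."""
    if worker_type is None:
        return WORKERS
    return [w for w in WORKERS if w["type"] == worker_type]

def cache_key(path, args):
    # The cache key must include the query string, otherwise
    # /workers?type=text and /workers?type=image would collide.
    query = "&".join(f"{k}={v}" for k, v in sorted(args.items()))
    return f"{path}?{query}" if query else path
```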

That in turn caused the bandwidth to my redis cache to skyrocket, so now I needed to implement a local redis cache on each node server as well, which required reworking my code to handle two caches at the same time. It was a cascading effect of refactoring 😀

Fortunately I managed to get it all to work, and also updated the code for the KoboldAI Client and its bridge to use the new and improved version 2 of the API; just yesterday, those changes were merged.

That in turn brought me to the next question. Now that the hordes were running together, it was no longer accurate to call it “stable horde” or “koboldai horde”. I had already foreseen this a while ago and had renamed my main repo to the AI Horde. But I now found the need to also serve all sorts of generative AI content from the main server. So I made the decision to deploy a new domain name, and the AI Horde was born!

I haven’t flipped all the switches needed yet, so at the moment the old https://stablehorde.net is still working, but the eventual plan is to make it a simple redirect to https://aihorde.net instead.

The KAI community is happy, and I’m no longer afraid they’re going to crash and burn from a random DB corruption; they can now scale along with the rest of the Horde.

Now onward to more features!

The 150Mbit/s problem

Recently my provider sent me a nastygram about my database VPS using too much bandwidth, 150Mbit/s or more, over 10 days, and how they had already throttled it to 100Mbit/s to avoid affecting other customers.

This caught me by surprise. I know that my database is the central location all my nodes converge on to pull data, but the transfer between them should be just text, and 150Mbit/s would be an insane quantity of text.

Fortunately, my provider had also crashed my DB just a day before, on the weekend, and their lack of response outside working hours had forced me to urgently deploy a new VM with a new PostgreSQL DB, to which I had already switched all my nodes by the time they recovered. Nevertheless, on checking the new DB, I discovered that it too was constantly using the same incredible amount of bandwidth. This meant that my new DB VM was also on a timer, as Contabo throttles you if your VM takes too much bandwidth for 10 days in a row. I had to resolve this fast.

The first order of business was to change the code so that all source images and source masks used for img2img are also stored in my Cloudflare R2 CDN. img2img requests are about 1/6 of the total stable horde traffic, but until now their images were stored as base64 strings inside the database, which meant that whenever those requests were retrieved, say for a worker to pick one up, all that data was transferred back and forth.
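The change amounts to swapping the base64 blob in the stored request for a short reference. A minimal sketch, where the payload fields and the `upload` callable are hypothetical stand-ins for the real R2 client:

```python
import base64
import hashlib

# Sketch: strip the base64 source image out of the stored payload,
# push the raw bytes to the CDN, and keep only a short URL in the DB
# row. Field names and the `upload` interface are hypothetical.

def offload_source_image(payload, upload):
    b64 = payload.pop("source_image", None)
    if b64 is not None:
        raw = base64.b64decode(b64)
        key = hashlib.sha256(raw).hexdigest()  # content-addressed name
        payload["source_image_url"] = upload(key, raw)
    return payload
```

The database row now carries a URL of a few dozen bytes instead of a megabyte-scale base64 string, so repeated retrievals of the request no longer move the image itself.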

This made a small dent in the traffic, but not nearly enough. I was still at 150Mbit/s outside rush hours and 200Mbit/s during peak demand.

I started getting a bit desperate, as I had expected that to be a big part of the problem. At this point I decided to open a discord thread and ask my community for help in debugging this, as I was getting out of my depth. The best suggestion came from someone who told me to enable pg_stat_statements.

That, along with the query to retrieve the most-used queries, led me to a few areas where I could improve things through caching. One example was the workers retrieving the models from the DB all the time, which I instead made cache in redis.

Unfortunately, none of my tweaks seemed to make much difference in the bandwidth. I was starting to lose my mind. Fortunately, someone mentioned the number of rows retrieved, so I decided to sort my postgres statements by rows retrieved, and that finally got me the thing I was looking for. There was one statement which retrieved two whole orders of magnitude more rows than every other statement!

That statement was an innocent-looking query to retrieve the performance statistics for all workers, which I then averaged. Given hundreds of workers, 20 rows per worker, and this statement being run once per second per status check on a request, you can imagine the amount of traffic it generated!

My initial step was to cache it in redis as well. That just shifted the problem, because while there wasn’t a load on the DB anymore, the amount of traffic stayed the same, just on a different VM. So the next step was to try and cache it locally. I initially turned to python cachetools and their TTL function caching. That seemed to work, but it was a big mistake: cachetools is absolutely not thread safe, and my code relies on the python waitress WSGI server, which spawns dozens of threads all over the place.

So one day later, I got reports of random 500 errors and other problems. I looked in to find my logs spewing hundreds of missing-key errors from the cache. Whoops!

I did make an attempt to see how I could make cachetools thread safe, but that involved adding python locks which would delay everything too much, for something that ultimately wasn’t needed. So instead I just created my own simple cache using global vars and built-in types, which are thread-safe by default. That solved the crashing issue.
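A hand-rolled cache like that can be sketched in a few lines; the names and the TTL below are illustrative, not the actual horde code:

```python
import time

# Sketch of a simple TTL cache: a module-level dict holding
# (value, expiry) tuples. Individual dict reads and writes are atomic
# under CPython's GIL, which keeps this safe across waitress threads
# without explicit locks. Names and the TTL are illustrative.

_CACHE = {}

def cached(key, compute, ttl=60, now=time.monotonic):
    """Return the cached value for `key`, recomputing it when expired."""
    entry = _CACHE.get(key)
    if entry is not None and entry[1] > now():
        return entry[0]
    value = compute()
    _CACHE[key] = (value, now() + ttl)
    return value
```

Two threads racing on an expired key may both recompute, but each writes a complete `(value, expiry)` tuple, so readers never see a half-written entry — which is the failure mode the cachetools version hit.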

But then I remembered that I’m stupid, and instead of pulling thousands of rows of integers to average in Python, I can just ask PostgreSQL to calculate the average and return that final number to me. Duh! This is what happens when my head is not yet tuned to using databases correctly.
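The fix is a one-line change to the query: aggregate in the database, return one row. Shown here with sqlite for a self-contained demo (the horde runs the equivalent AVG() on PostgreSQL); the table and column names are illustrative:

```python
import sqlite3

# Sketch of the fix: let the database compute the average and return a
# single number, instead of shipping thousands of rows to Python.
# Schema and names are illustrative, not the actual horde tables.

def worker_avg_performance(conn, worker_id):
    row = conn.execute(
        "SELECT AVG(performance) FROM worker_performance WHERE worker_id = ?",
        (worker_id,),
    ).fetchone()
    return row[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE worker_performance (worker_id INTEGER, performance REAL)")
conn.executemany(
    "INSERT INTO worker_performance VALUES (?, ?)",
    [(1, 10.0), (1, 20.0), (2, 5.0)],
)
```

Each call now transfers one float over the wire instead of every performance row for every worker, which is where the two-orders-of-magnitude row count came from.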

So finally, I wiped my internal cache and switched to that approach. And fortunately that was it! The Mbit/s on my database server dropped from an average of 150 to 15. A 10x reduction!

New Discord Bot for the Stable Horde

For a few months now the Stable Horde has had its own Discord bot, developed by JamDon. One of the most important aspects I wanted for the bot (and the reason for its original creation) was the ability to gift kudos to people via emojis, which would serve as a way to promote good behavior and mutual aid.

In the process, the bot received more and more features, such as the ability to generate images from the Stable Horde, get information about one’s linked horde account, etc.

Unfortunately, development eventually slowed, and then 2 months ago or so JamDon informed me that they no longer have time to continue development. Further complicating things was the fact that the bot was written in JavaScript, which I do not speak, making it impossible for me to continue its development on my own. So it languished unmaintained as the horde got more and more features and other things started changing. It was the reason why I couldn’t make the “r2” payload parameter true by default, for example.

The final straw was when our own bot got IP-banned by the horde, because it was a public bot added to a lot of servers we do not control, and apparently people there attempted to generate unethical images, which the horde promptly blocked. Unfortunately, that meant the bot’s image generation also stopped working everywhere every time this happened.

At the same time, another discord regular had not only developed their own discord bot based on the stable horde, but a whole JavaScript SDK! The bot was in fact very well developed and had most of the features of the previous stable horde bot, plus a lot of new stuff like image ratings. The only really important thing missing was the ability to gift kudos via emojis, which was the original reason to get a discord bot in the first place 🙂

Fortunately, with some convincing and plenty of kudos, zelda_fan agreed to onboard this functionality, as well as a few other small things that I wished for (like automated roles), and the Stable Horde Bot was reborn!

Unfortunately this did mean that all existing users were logged out and had to log in once more to be able to use the functionality, and its commands did change quite significantly, but those were fairly minor things.

Soon after the new bot was deployed, it was also added to the official LAION discord, so that their community could use it to rate images as well. I also checked, and the bot has already been added to 365 different servers by now. Fortunately its demand is not quite as massive, as it’s not prepared to scale quite as well as the stable horde itself.

BTW, if you want to add the bot to your own discord server, you can do so by visiting this link. If you want to be able to transfer kudos, you’ll need to contact me so I can onboard your emojis. But other functionality should work.

The image ratings are flooding in!

It’s been about 2 weeks since we deployed the ratings system to gather data for LAION and once the main UIs on-boarded them, they’ve been coming in at an extraordinary pace!

So I thought I’d share some numbers.

Amount of ratings per client

 count  |     client
--------+-----------------
 175060 | ArtBot
    461 | Discord Bot
    159 | Droom
   4124 | Lucid Creations
   2545 | Stable UI
   9430 | Unknown
(6 rows)

As you can see, the ArtBot ratings are crushing everything else, but to be fair, ArtBot was the first to integrate the ratings system and is also a very popular UI. Lucid Creations has added post-generation ratings, but not yet generic ratings. Stable UI and the Discord Bot only added ratings a couple of days ago. Things are about to pick up even more!

Artifacts per client

 count |     client
-------+-----------------
 15308 | ArtBot
     0 | Discord Bot
     0 | Droom
  1073 | Lucid Creations
  2550 | Stable UI
     2 | Unknown
(6 rows)

And we can also see how many artifact ratings each client has submitted. Artifact ratings were added only a couple of days ago as well. ArtBot still dominates, but only by a single order of magnitude instead of two 😀

Rated images count

 count | ratings_count
-------+---------------
 71197 |             1
  7860 |             2
 30140 |             3
  3644 |             4
    71 |             5
    11 |             6
     2 |             7
(7 rows)

 count
--------
 112928
(1 row)

The number of images which have received at least 1 rating is very heartwarming. We have 112K images with at least 1 rating, of which 30K have 3 ratings each!

Total ratings count

 count
--------
 191914
(1 row)

This is just the raw amount of ratings submitted. Almost 200K between generic and post-generation ratings in ~14 days. That is ~13K ratings per day! For free! Just because people want to support something good for the commons!

This is the power of building a community around mutual aid! People love telling others what they think of their work (just see how many ratings midjourney has received), and people also love supporting an endeavour which, through FOSS and transparency, ensures they are not being taken advantage of to enrich a few at the top!

This is exactly what my dream was for this project. To unite a community around helping each other, instead of benefiting a corporation at the top. And it is working better than I ever expected in this short amount of time!

Y’all are awesome!