Fantasy.ai is how the enshittification of Stable Diffusion begins

Fantasy.ai has been in hot water since its inception, which, for a company built on the Open Source community, is quite an impressive feat on its own.

For those who don’t know, fantasy.ai basically goes to various popular model creators and tempts them with promises of monetary reward for their creative work, if only they agree to sign over some exclusive rights for commercial use of their model, as well as some other priority terms.

It’s a downright Faustian deal, and I would argue this is how a technology that began with Open Source ideals, in order to counteract the immense weight of players like OpenAI and Midjourney, begins to be enclosed.

Cory Doctorow penned an excellent new word for the process by which web 2.0 companies die – Enshittification.

  • First, they offer amazing value to users, which attracts a lot of them and makes the service more valuable to other businesses, like services that integrate with them and advertising agencies.
  • Then they start making the service worse for their user-base but more valuable for their business partners, such as by increasing the amount of adverts for the same price, selling user data and metrics, pushing paid content to users who don’t want to see it, and so on.
  • Next, once their business partners are also sufficiently reliant on them for income, they tighten their grip and start extracting all the value for themselves and their shareholders, such as by requiring extravagant payments from businesses just to let people see the posts they want to see, or the products they want to buy.
  • Finally, eventually, inexorably, the service experience becomes so shitty, so miserable, that it breaches the Trust Thermocline and something disruptive (or sometimes something simple) triggers a mass exodus of the user base.
  • Then the service dies, or becomes a zombie filled with ever more desperate advertisers and an ever-increasing flood of spam, as the dying service keeps rewarding its executives with MBAs rather than its IT personnel.

Because Stable Diffusion is open source, we are seeing an explosion of services built on top of it, cropping up practically daily. A lot of them are trying to figure out how to stand out from the rest, so we have a unique opportunity to see how enshittification can progress in the Open Source Generative AI ecosystem.

We have services at the first stage, like CivitAI, which offers an amazing service to its user-base by tying social media to Stable Diffusion models and fine-tunes, and by making it easy to share your work. They have not yet figured out their business plan, which is why, so far, their service appears completely customer-focused.

We have services like Mage.space, which started completely free and uncensored for all and, as a result, quickly gathered a dedicated following of users without access to GPUs who used it for free AI generations. They are progressing to the second stage of enshittification by locking NSFW generations behind a paywall, serving adverts, and now also making themselves more valuable to model creators as soon as they smelled blood in the water.

We do not yet have Stable Diffusion services at the late stages of enshittification, as the environment is still way too fresh.

Fascinatingly, Fantasy.ai’s main mistake is not their speedrun through the enshittification process, but rather attempting to bypass the first step. Unfortunately, fantasy.ai entered the Generative AI game late, as its creator is an NFT-bro who wasn’t smart enough to pivot as early as the Mage.space NFT-bro. So, to make up for lost time, they are flexing their economic muscles, trying to make their service better for their business partners (including the model creators) and choking their business rivals in the process. Smart plan, if only they hadn’t skipped the first step: making themselves popular by attracting loyal users.

So now the same user-base which is loyal to other services has turned against fantasy.ai, and a massive flood of negative PR is directed towards them at every opportunity. Because fantasy.ai never earned loyalty through amazing customer service, the community could see the signs of enshittification more clearly and turned against them from the start. Maybe fantasy.ai has enough economic muscle to push through the tsunami of bad PR and pull off step 2 before step 1, but I highly doubt it.

But it’s also interesting to see so many model creators being so easily sucked in without realizing what exactly they’re signing up for. The money upfront might be good for an aspiring creator (or not: $150 is way lower than I expected), but if fantasy.ai succeeds in dominating the market, that deal will eventually turn into a ball and chain, and the same creators who made fantasy.ai so valuable to the user-base will find themselves having to do things like bribe fantasy.ai simply to show their models to the same users who already declared they wish to see them.

It’s a trap, and it’s surprising and a bit disheartening to see so many creators sleepwalking into it when we have ample history showing us this is exactly what will happen. As it has happened in every other instance in the history of the web!

AI-powered anti-CSAM filter for Stable Diffusion

One of the big problems we’ve been fighting against since I created the AI Horde is attempts to use it to generate CSAM. While this technology is very new and there are a lot of questions to answer on whether it is even illegal to generate CSAM for personal use, I erred on the safe side and made it a rule from the start that such generated content goes against the AI Horde, without exceptions.

It is not an overstatement to say I’ve spent weeks of work-hours on this problem: from adding capabilities for the workers to set their own comfort level through a blacklist, a censorlist and a bunch of other variables, to blocking VPN access, to the massive horde regex filter that sits in front of every request and tries to ascertain from the submitted prompt whether it intends to generate CSAM or not.

However, the biggest problem is not just pedos, it’s stupid but cunning pedos! Stupid because they keep trying to use, without a VPN, a free service which records all their failed attempts. Cunning because they keep looking for ways to bypass our filters.

And that’s where the biggest problem lay until now. The regex filter works on language, which can express the same concept in many ways, and, very frustratingly, the AI understands multiple typos of various words, and other languages, perfectly well. This strains what I can achieve with regex to the breaking point, and led to a cat-and-mouse game where dedicated pedos kept trying to bypass the filter using typos and translations, and I kept expanding the regex.

But it was inherently a losing game which was wasting an incredible amount of my time, so I needed a more robust approach. My new solution was to onboard image interrogation capability into the worker code, using image2text, AKA image interrogation: basically an AI model which you feed an image and a number of words or sentences, and it tells you how well each of those words is represented in that image.
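To make that concrete, here is a minimal sketch of scoring a set of words against an image with an off-the-shelf CLIP model from Hugging Face. This is not the horde’s actual worker code (the model choice, function name and word list are purely illustrative), but it shows the kind of per-word similarity values the filter works with.

from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Illustrative model choice; the horde workers use their own interrogation setup.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def interrogate(image: Image.Image, words: list[str]) -> dict[str, float]:
    """Return a cosine similarity score between the image and each word."""
    inputs = processor(text=words, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    image_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    similarities = (image_emb @ text_emb.T).squeeze(0)
    return {word: float(sim) for word, sim in zip(words, similarities)}

scores = interrogate(Image.open("generation.png"), ["child", "adult", "nsfw", "pregnant"])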

So every AI Horde Worker will now automatically scan every image it generates with CLIP and look for a number of words. Some of them look for underage context, while others look for lewd context. The trick is that detecting one or the other context on its own is OK: you’re allowed to draw children, and you’re allowed to draw porn. It’s when the two combine that the filter goes into effect and censors the image!

But this is not even the whole plan. While the CLIP scanning on its own is fairly accurate, I further tweaked my approach by taking into account the values of other interrogated words. For example, I noticed that when looking for “infant” in a generated image, pregnant women would also rate very highly for it, causing the CSAM filter to consistently censor images of naked pregnant women. My solution was to also interrogate for “pregnant”, and if the likelihood of that is very high, raise the threshold that “infant” has to hit.
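A simplified sketch of that two-context check, with the pregnancy counter-example folded in. The word lists and threshold values below are invented for illustration; the real filter tunes many more words and thresholds.

# Hypothetical thresholds; the real filter tunes these per word and per model type.
UNDERAGE_WORDS = {"child": 0.18, "infant": 0.18, "toddler": 0.18}
LEWD_WORDS = {"nsfw": 0.18, "nude": 0.18}

def is_csam(scores: dict[str, float]) -> bool:
    """Censor only when an underage context AND a lewd context are both detected."""
    thresholds = dict(UNDERAGE_WORDS)
    # Counter-example handling: pregnancy tends to inflate the "infant" score,
    # so when "pregnant" rates highly we demand a stronger "infant" signal.
    if scores.get("pregnant", 0.0) > 0.2:
        thresholds["infant"] += 0.03
    underage = any(scores.get(word, 0.0) > t for word, t in thresholds.items())
    lewd = any(scores.get(word, 0.0) > t for word, t in LEWD_WORDS.items())
    return underage and lewd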

The second trick was to also utilize the prompt. A lot of pedos were trying to bypass my filters (which were looking for things like “young”, “child” etc.) by not using those words, and instead specifying “old”, “mature” etc. in the negative prompt, effectively going the long way around to make Stable Diffusion draw children without explicitly telling it to. This was downright impossible to block using pure regex without causing a lot of false positives or an incredible amount of regex crafting.

So I implemented a little judo trick instead. My new CSAM filter now also scans the prompt and negative prompt for certain words using regex, and if they exist, slightly adjusts the interrogated word weights based on the author’s apparent intent. So let’s say the author used “old” in the negative prompt: this will automatically cause the “child” weight to increase by 0.05. This may not sound like a lot, but most words tend to vary from 0.13 to 0.22, so it actually has a significant chance to push a borderline word (which it would be in a successful CSAM attempt) over the top. This converts the true/false result of a regex match into a fine-grained approach, where each regex hit reduces the detection threshold only slightly, leaving non-CSAM images unaffected (since the weight of the interrogated word starts low) while making it more likely to catch the intended results.
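And a sketch of the judo trick itself, again with a hypothetical word list: a regex hit in the negative prompt nudges the weight of a related interrogation word upward before the threshold comparison.

import re

# Hypothetical mapping: a regex hit in the negative prompt nudges the score
# of a related interrogation word upward before it is compared to its threshold.
NEGATIVE_PROMPT_NUDGES = [
    (re.compile(r"\b(old|mature|adult)\b", re.IGNORECASE), "child", 0.05),
]

def adjust_scores(negative_prompt: str, scores: dict[str, float]) -> dict[str, float]:
    adjusted = dict(scores)
    for pattern, word, nudge in NEGATIVE_PROMPT_NUDGES:
        if pattern.search(negative_prompt):
            adjusted[word] = adjusted.get(word, 0.0) + nudge
    return adjusted

# A borderline "child" score of 0.2 becomes 0.25 when "old" appears in the
# negative prompt, enough to cross a threshold in the 0.13-0.22 band.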

Now, the above is not a perfect description of what I’m doing, in the interest of keeping things understandable for the layperson, but if you want to see the exact implementation you can always look at my code directly (and suggest improvements 😉).

In my tests, the new filter has fairly great accuracy with very few false positives, mostly around anime models, which make every woman look extraordinarily young as a matter of course. But in any case, with the amount of images the horde generates, I’ll have plenty of opportunity to continue tweaking and maybe craft more specific filters for each type of model (realistic, anime, furry etc.).

Of course I can never expect this to be perfect, but that was never the idea. No such filter can ever catch everything, but my hope is that this filter, along with my other countermeasures like the regex filter, will have a high enough detection rate to frustrate even the most dedicated pedos off the platform.

Merging of the Hordes. The AI Horde is live!

A while back (gosh, it occurs to me this project is half a year old by now!) I took significant steps to join the two forks I had made of the AI Horde (one for Stable Diffusion and one for KoboldAI), as their diverging code was too difficult to maintain and keep at parity with the features and bug fixes I kept adding.

Later on, I realized my code just could not scale anymore, so I undertook a massive refactoring of the code-base to switch to an ORM approach. Due to the time criticality of that refactor (at the time, the stable horde was practically unusable due to the sheer load), I focused on getting the stable horde API up and running and disregarded the KoboldAI API, as that was running stably on a different machine and didn’t have nearly enough traffic to be affected.

Once that was deployed, a number of other fires constantly had to be put out and new features onboarded, as Stable Diffusion is growing by leaps and bounds. That meant I never really had time to onboard KoboldAI to the ORM as well, especially since the code required a refactor to allow two types of workers to exist.

Later on, I added Image Interrogation capabilities as well, which incidentally required setting up the horde to handle multiple types of workers. This led me to figure out how to do ORM class inheritance (which required figuring out polymorphic tables and other fun stuff), but it also meant that a big part of the groundwork was laid for adding the text workers (which is the kind of thing that does wonders to get my ADHD brain over its executive dysfunction).
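For those curious what that groundwork looks like, here is a minimal sketch of joined-table polymorphic inheritance in SQLAlchemy. The class and column names are invented for illustration; the real horde models carry far more state.

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Worker(Base):
    """Shared columns for every worker type (joined-table inheritance)."""
    __tablename__ = "workers"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True, nullable=False)
    worker_type = Column(String(30), nullable=False)
    __mapper_args__ = {
        "polymorphic_identity": "worker",
        "polymorphic_on": worker_type,
    }

class ImageWorker(Worker):
    __tablename__ = "workers_image"
    id = Column(Integer, ForeignKey("workers.id"), primary_key=True)
    max_pixels = Column(Integer, default=512 * 512)
    __mapper_args__ = {"polymorphic_identity": "image"}

class TextWorker(Worker):
    __tablename__ = "workers_text"
    id = Column(Integer, ForeignKey("workers.id"), primary_key=True)
    max_length = Column(Integer, default=512)
    __mapper_args__ = {"polymorphic_identity": "text"}

# session.query(Worker).all() now returns both image and text workers,
# each materialized as its concrete subclass.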

Since then, it’s been constantly in the back of my mind that I need to finally do the last part and merge the two hordes into a single code base. I had kept the KAI horde in a single lonely branch called KAI_DO_NOT_DELETE (because I deleted the other branch once during a branch cleanup :D) with a single-core horde node running. But requests for improvements and bug fixes on the KAI horde kept coming, and the code base had diverged so much by now that it was quite a mess to even remember how to update things properly.

The final straw was when I noticed that traffic to the KAI Horde had also increased significantly, probably due to the ease of using it through KoboldAI Lite. It was getting closer and closer to the point where the old code base would collapse under its own weight.

So it was time. I blocked my weekend off and started the 4th large refactoring of the AI Horde base: the one which would allow me to run the two horde types, which were mutually exclusive in the past, at the same time.

This one meant a whole new endpoint, new table polymorphism, and going through all my database functions to ensure that data is fetched from all types of polymorphic classes.

I also wanted to make my endpoints more flexible, so it occurred to me it would be better to have, say, api/v2/workers?type=text instead of maintaining api/v2/workers/image and api/v2/workers/text independently. This in turn ran into caching issues, as my cache did not take the query part into account when storing entries independently (and I am still not sure how to make it do so), so I had to turn to the redis cache.
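The endpoint idea itself is simple; here is a toy Flask sketch of the query-parameter approach. The data and handler are stand-ins, not the horde’s actual implementation.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy data standing in for the polymorphic worker query from the sketch above.
WORKERS = [
    {"name": "gpu-worker-1", "type": "image"},
    {"name": "cpu-worker-1", "type": "text"},
]

@app.route("/api/v2/workers")
def list_workers():
    """One endpoint; an optional ?type=image|text query parameter filters it."""
    worker_type = request.args.get("type")
    result = [w for w in WORKERS if not worker_type or w["type"] == worker_type]
    return jsonify(result)

# GET /api/v2/workers            -> both workers
# GET /api/v2/workers?type=text  -> only the text worker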

That in turn caused the bandwidth to my redis cache to skyrocket, so I then needed to implement a local redis cache on each node server as well, which required reworking my code to handle two caches at the same time. It was a cascading effect of refactoring 😀
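A rough sketch of what such a two-tier cache can look like with redis-py, assuming a local redis on each node and a shared central one; the hosts and TTLs are placeholders.

import redis

# Hypothetical two-tier setup: each node runs its own redis for hot reads,
# while writes still go to the shared central redis everyone agrees on.
local = redis.Redis(host="localhost", port=6379)
central = redis.Redis(host="central-redis.internal", port=6379)

LOCAL_TTL = 30      # seconds; short, so nodes do not drift too far apart
CENTRAL_TTL = 300

def cached_get(key: str):
    value = local.get(key)
    if value is not None:
        return value
    value = central.get(key)
    if value is not None:
        # Populate the local cache so the next read stays on this node.
        local.setex(key, LOCAL_TTL, value)
    return value

def cached_set(key: str, value: bytes) -> None:
    central.setex(key, CENTRAL_TTL, value)
    local.setex(key, LOCAL_TTL, value)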

Fortunately I managed to get it all to work, and also updated the code for the KoboldAI Client and its bridge to use the new and improved version 2 of the API; just yesterday, those changes were merged.

That in turn brought me to the next question. Now that the hordes were running together, it was no longer accurate to call it the “stable horde” or the “koboldai horde”. I had already foreseen this a while ago and had renamed my main repo to the AI Horde. But I now also needed to serve all sorts of generative AI content from the main server, so I decided to deploy a new domain name. And the AI Horde was born!

I haven’t flipped all the switches needed yet, so at the moment the old https://stablehorde.net is still working, but the eventual plan is to make it simply redirect to https://aihorde.net instead.

The KAI community is happy, I’m no longer afraid they’re going to crash and burn from a random DB corruption, and they can scale along with the rest of the Horde.

Now onward to more features!

New Discord Bot for the Stable Horde

For a few months now, the Stable Horde has had its own Discord bot, developed by JamDon. One of the most important aspects I wanted for the bot (and the reason for its original creation) was the ability to gift kudos to people via emojis, which would serve as a way to promote good behavior and mutual aid.

In the process, the bot received more and more features, such as the ability to generate images from the Stable Horde, get information about the linked horde account, etc.

Unfortunately, development eventually slowed, and then, 2 months ago or so, JamDon informed me that they no longer had time to continue development. Further complicating things was the fact that the bot was written in JavaScript, which I do not speak, which made it impossible for me to continue its development on my own. So it languished unmaintained as the horde got more and more features and other things kept changing. It was the reason why I couldn’t make the “r2” payload parameter true by default, for example.

The final straw was when our own bot got IP banned by the horde: because it was a public bot, it had been added to a lot of servers which we do not control, and apparently people there attempted to generate unethical images, which the horde promptly blocked. Unfortunately, that meant the bot’s image generation also stopped working everywhere every time this happened.

At the same time, another discord regular had developed not only their own discord bot based on the stable horde, but a whole JavaScript SDK! The bot was in fact very well developed and had most of the features of the previous stable horde bot, plus a lot of new stuff like image ratings. The only really important thing missing was the ability to gift kudos via emojis, which was the original reason to get a discord bot in the first place 🙂

Fortunately, with some convincing and plenty of kudos, zelda_fan agreed to onboard this functionality, as well as a few other small things I wished for (like automated roles), and the Stable Horde Bot was reborn!

Unfortunately, this did mean that all existing users were logged out and had to log in once more to use the functionality, and its commands did change quite significantly, but those were fairly minor things.

Soon after the new bot was deployed, it was added to the official LAION discord as well, so that their community could use it to rate images too. I also checked, and the bot has already been added to 365 different servers by now. Fortunately, demand on it is not too massive, as it’s not prepared to scale quite as well as the stable horde itself.

BTW, if you want to add the bot to your own discord server, you can do so by visiting this link. If you want to be able to transfer kudos, you’ll need to contact me so I can onboard your emojis, though. But other functionality should work.

The image ratings are flooding in!

It’s been about 2 weeks since we deployed the ratings system to gather data for LAION, and once the main UIs onboarded it, ratings have been coming in at an extraordinary pace!

So I thought I’d share some numbers.

Amount of ratings per client

 count  |     client
--------+-----------------
 175060 | ArtBot
    461 | Discord Bot
    159 | Droom
   4124 | Lucid Creations
   2545 | Stable UI
   9430 | Unknown

As you can see, the ArtBot ratings are crushing everything else, but to be fair, ArtBot was the first to integrate the ratings system and is also a very popular UI. Lucid Creations has added post-generation ratings, but not yet generic ratings. Stable UI and the Discord Bot only added ratings a couple of days ago. Things are about to pick up even more!

Artifacts per client

 count |     client
-------+-----------------
 15308 | ArtBot
     0 | Discord Bot
     0 | Droom
  1073 | Lucid Creations
  2550 | Stable UI
     2 | Unknown
(6 rows)

And we can also see how many artifact ratings each client has done. Artifacts were added only a couple of days ago as well. ArtBot still dominates, but only by a single order of magnitude instead of two 😀

Rated images count

 count | ratings_count
-------+---------------
 71197 |             1
  7860 |             2
 30140 |             3
  3644 |             4
    71 |             5
    11 |             6
     2 |             7
(7 rows)

 count
--------
 112928
(1 row)

The number of images which have received at least 1 rating is very heartwarming. We have 112K images with at least 1 rating, of which 30K have 3 ratings each!

Total ratings count

 count
--------
 191914
(1 row)

This is just the raw number of ratings submitted: almost 200K between generic ratings and post-generation ratings in ~14 days. That is ~13K ratings per day! For free! Just because people want to support something good for the commons!

This is the power of building a community around mutual aid! People love telling others what they think of their work (just see how many ratings midjourney has received), and people also love supporting an endeavour which, through FOSS and transparency, ensures they are not being taken advantage of to enrich a few at the top!

This is exactly what my dream was for this project. To unite a community around helping each other, instead of benefiting a corporation at the top. And it is working better than I ever expected in this short amount of time!

Y’all are awesome!

Stable Horde receives stability.ai processing power!

A week ago I mentioned that we had begun a collaboration with LAION to provide them with ratings on images. The amount of ratings we have received since then has blown away all our expectations! In just a week, you’ve all rated close to 130,000 individual images! As a comparison, LAION-aesthetics v2, which was instrumental for training Stable Diffusion v1.x, used fewer than 600K rated images. We’ve reached roughly 1/4 of that amount in a week!

Needless to say, these amounts seemed to turn some heads to the power of mutual aid provided by the stable horde, and some gears were set in motion.

LAION spoke with stability.ai directly and made the case that it would likewise benefit them to support the health of the stable horde itself. Since stability.ai is set to be the most direct beneficiary of a better trained laion-aesthetics v3, it makes perfect sense.

I was not privy to the discussions that happened, but I was happy to learn that Tom, the CTO of stability.ai, arranged to provide us with some sponsored resources in the form of 4 VMs with Nvidia RTX 4000 GPUs!

Quite surprisingly, I had to deploy the VMs myself, so I crafted the most optimal setup for taking advantage of those 8GB of VRAM, based on my experience with my own RTX 2070. Each of them has been loaded with standard stable_diffusion 1.5 and 2.1, and each then has 8-10 other finetuned models to help cover the versatility provided by the Stable Horde. Granted, we are currently serving close to 100 different models, but the fact that those workers will remain running consistently 24/7 should provide coverage and allow other workers to switch to less supported models as well.

I hope this is the start of a fruitful collaboration between stability.ai and the Stable Horde. The way I see it, the current scenario is a win-win for everyone: we get a more consistent service, which allows more people to use it and makes them more likely to rate images to give back, and those ratings are then fed back to LAION and, by extension, stability.ai.

The Stable Horde has its first chrome extension!

About a week ago I deployed image interrogation to the stable horde, allowing low-powered GPUs and high-powered CPUs to also become productive contributors on the horde and generate kudos for their owners.

A few days ago, the extension I talked about was finally released once more, relying on the Stable Horde this time: GenAlt

GenAlt is an extension that allows visually impaired people to generate alt-text for any image they encounter on the internet, giving them freer access to an area from which they were previously excluded. The extension’s description goes into more length about its stated purpose, so I urge you to share it so that people who need it can find it.

The first release of the extension was set up to automatically pick up every image displayed on a webpage and send it over to the horde for captioning. That meant that a simple scroll through twitter would lead to hundreds of images being sent to the horde for captioning, per person!

That in turn led to the stable horde ending up with 2000-4000 images to interrogate in its queue. Even with my own worker handling 20 threads at a time, it was just impossible to clear them all, which effectively meant the interrogation service became unusable. To top it off, as the stable horde started deleting expired interrogations, the extension received 404 responses, but unfortunately didn’t take that as a sign to abort polling for them.
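For reference, a well-behaved polling loop would treat a 404 as a signal to give up. The sketch below is in Python rather than the extension’s JavaScript, and the status URL and response fields are assumptions for illustration, not the extension’s actual code.

import time
import requests

def poll_interrogation(status_url: str, interval: float = 5.0, timeout: float = 600.0):
    """Poll a (hypothetical) interrogation status URL until it finishes or expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = requests.get(status_url)
        if response.status_code == 404:
            # The horde has expired this request; stop polling instead of retrying forever.
            return None
        response.raise_for_status()
        payload = response.json()
        if payload.get("state") == "done":  # field name assumed for illustration
            return payload
        time.sleep(interval)
    return None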

At one point we had almost maxed out our available connections to each stable horde backend. Fortunately, we kept chugging along without much impact. It was one hell of a stress test, though!

So I asked the developer to switch it to being triggered by a button or an image-hover action, which, while not as user friendly, certainly wouldn’t completely flood the horde. That change (along with fixing the 404 handling) was finally deployed yesterday, and that took care of the flooding issue.

An example of GenAlt’s new trigger context menu

Now the horde is finally handling the captions easily as they trickle in at a controllable rate. The developer is planning some more updates, such as triggering it on mouse-hover instead of a specific context-menu button, which is not as easy to access, and possibly we can onboard translating the captions before we send them back.

A collaboration begins between Stable Horde and LAION!

Last week I wrote about how we started creating a new dataset of stable horde images to provide to LAION. Today I am proud to announce that we have further deepened our collaboration by setting up a mechanism which allows the Stable Horde community to contribute aesthetic ratings for LAION datasets!

Over the last weekend, hlky from Sygil.dev and I deployed a new service which allows us to aesthetically rate images from LAION’s multiple datasets. We deployed an API and thus allowed any client to interact with it. You can read the details of how it works on the blog I linked above, so I’m not going to repeat everything.

This is exciting for me because the Stable Horde has suffered from a distinct lack of visibility. None of the major AI-focused media (newsletters, YouTubers etc) have mentioned us to date. The very first coverage we got was from a PC magazine!

All that is to say that it’s been an uphill struggle to get the Stable Horde noticed in a way that will lead to more workers, which will allow us to democratize access to AI for everyone. So I am very happy to pivot the amazing stable horde community towards such positive work, which will bring more attention to what we’re trying to achieve.

We are still hard at work tweaking the information we store for each rating. For example, we store the number of images each user had generated at the time of the rating, which will allow researchers to filter out potentially spammy users.

We are also adding more and more countermeasures, as there’s always the fear that someone will simply script random ratings to get kudos. Even though the Stable Horde is free to use without kudos, and even though kudos have no monetary value, people do strange things to see the “numba go up”. Now, I don’t particularly care if people harvest kudos like this, but I very much do care about our ratings being poisoned by garbage.

So if you’re someone who wants to make an exploit script to harvest kudos via ratings, please just join our discord instead. The kudos flow like candy when you’re active! And you will also not be harming the AI community itself.

Our exported dataset has already grown to 80K shared images, and we have gathered 20K ratings on the LAION datasets within 2 days. For comparison, some of the biggest rated datasets have just 175K ratings, which were done by paid workers (and we all know how motivated they are to be accurate). Honestly, our kudos incentives and the community’s passion to improve AI are exceeding even my wildest expectations!

Here’s to making the best damn dataset that exists!

Sharing is Caring

For a while now, I’ve been discussing with LAION ways to use the power of the horde to help them in some fashion. After coordinating with hlky from the Sygil.dev crew, I decided to provide an opt-in mechanism for people to store their text2img stable diffusion generations in an alternative storage bucket. This bucket will in turn be provided to LAION so that they can use it for aesthetic training, or for other similar purposes.

So today I finally released this new mode. For clients, it’s a simple flag in the payload: set “share” to True, and your generation will be uploaded to that storage bucket. This will also save me infrastructure costs, as I will not have to pay for the storage of these images out of my own pocket.
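A minimal example of what that looks like from a client’s side, assuming the async generation endpoint on stablehorde.net; check the API documentation for the exact field name and the full parameter list.

import requests

payload = {
    "prompt": "a watercolor painting of a lighthouse at dusk",
    "params": {"width": 512, "height": 512, "steps": 30},
    "share": True,  # sharing flag as described above; the exact field name may differ in the API docs
}
response = requests.post(
    "https://stablehorde.net/api/v2/generate/async",
    json=payload,
    headers={"apikey": "0000000000"},  # anonymous key; anonymous generations are always shared anyway
)
print(response.json())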

To give people a further incentive to turn this on, and because I wanted a way to show the cost of running the horde, I have also implemented a “horde tax” kudos burn. Every time you generate an image, the overall kudos cost is increased by 3. This represents the overall resource cost of passing through the horde, such as bandwidth, I/O and storage. However, if you opt to turn on the sharing switch, the overall “tax” is just +1 kudos.
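In other words, the tax is a flat amount on top of whatever the generation itself costs, along these lines:

def total_kudos_cost(base_cost: float, shared: bool) -> float:
    """Illustrative only: the flat 'horde tax' described above."""
    tax = 1 if shared else 3
    return base_cost + tax

# e.g. a 10-kudos generation costs 13 kudos unshared, but 11 kudos when shared.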

You might ask: why not make the tax proportional to the overall kudos cost, something like +30%/+10%? The reason is that the overall kudos cost depends on how difficult it is for the workers to generate that image. From the perspective of the horde infrastructure, a 512x512, 50-step image is not much different from a 1024x1024, 300-step image, even though the latter takes an order of magnitude more time to generate.

In fact, many small requests are technically worse for the horde’s infrastructure costs than a few large ones. It’s not that I want to discourage the small ones, though, because they are actually good for the horde workers (and thus the overall generation speed). Therefore the “tax” is fairly trivial in the grand scheme of things. Just a bit of extra “burn”.

One important thing to note, however, is that image generations from anonymous accounts are always shared. This is part of my general strategy of discouraging anonymous use of the horde; it’s just more difficult to manage the load when people use it like that. This is why anonymous has the lowest priority and the most restrictions. And now anonymous users will always help provide data to LAION as well.

Finally, img2img and inpainting requests are never shared. This is because those are based on existing images, and I cannot know if someone used a personal photo at low strength or something similar, so I prefer to err on the side of caution.

This is not the last bit of support the AI Horde plans to give to LAION either. We are already working on new features like an aesthetic rating trainer, and so on. I hope this sort of assistance can be put to good use for the benefit of all humanity!

AI Spam?

I wonder what effect generated text like this will have on internet spam. Until now, our anti-spam filters were built heuristically to try and recognize typical spam patterns, but now spammers can turn to AI models to generate unique text to mask their spammy messages in emails, social media and blogs. Imagine spam links wrapped in elaborate text about any subject you can conceive of. And if the site linked behind it is novel each time, it will become increasingly infeasible for administrators to figure it out.

Not only that, but a lot of sites use a minimum active account lifetime to figure out which accounts could be spam. Such accounts are flagged based on site interaction and ratings, which either forced spammers to buy accounts, or to set up spam farms with people doing nothing but interacting natively on social media until their accounts had been live long enough to bypass the spam filter.

Now spammers can simply plug a text AI behind each account and have it run on autopilot for a few months until they flip its spam switch, at which point it transparently starts peppering spam links into its otherwise normal posts, making the switch almost indistinguishable unless it posts a well-known spam site or something. They could even target specific subreddits and finetune generative models on the typical posts of users there, which would make the output fit the typical subreddit pattern extremely well.

I also expect anti-spam efforts to turn to their own AI models to catch them, but I wonder how much these countermeasures can help. It could be that a model could be trained to spot spam patterns a human cannot distinguish, but I’m curious to see how effective it would be. I suspect an easier solution would be to have an AI check the site behind each link and try to figure out whether it’s a spam site or not, based on heuristics a human might take too long to evaluate.

I have the feeling, however, that this is ultimately a battle the countermeasure AI is bound to lose. And if that happens, it will start leading to what I suspect will be an internet-wide dissolution of anonymous trust. But I will write about that in a later post.