This is how the enshittification of Stable Diffusion begins

The company in question has gotten into hot water since its inception, which, for a company based on the Open Source community, is quite an impressive feat on its own.

For those who don’t know, the company basically goes to various popular model creators and tempts them with promises of monetary reward for their creative work, if only they agree to sign over some exclusive rights for commercial use of their models, as well as some other priority terms.

It’s a downright Faustian deal, and I would argue that this is how a technology that began by using Open Source ideals to counteract the immense weight of players like OpenAI and Midjourney starts to be enclosed.

Cory Doctorow penned an excellent new word for the process by which web 2.0 companies die: Enshittification.

  • First they offer amazing value to the users, which attracts a lot of them and makes the service more valuable to other businesses, like integration services and advertising agencies.
  • Then they start making the service worse for their user-base but more valuable for their business partners, such as by increasing the amount of adverts for the same price, selling user data and metrics, pushing paid content to more users who don’t want to see it, and so on.
  • Then, once their business partners are also sufficiently reliant on them for income, they tighten their grip and start extracting all the value for themselves and their shareholders, such as by requiring extravagant payments from businesses to let people see the posts they want to see, or the products they want to buy.
  • Finally, eventually, inexorably, the service experience becomes so shitty, so miserable, that it breaches the Trust Thermocline, and something disruptive (or sometimes, something simple) triggers a mass exodus of the user base.
  • Then the service dies, or becomes a zombie, filled with ever more desperate advertisers and an ever increasing flood of spam, as the dying service keeps rewarding executives with MBAs rather than its IT personnel.

Because Stable Diffusion is open source, we are seeing an explosion of services based on it cropping up practically daily. A lot of those services are trying to figure out how to stand out from the others, so we have a unique opportunity to see how enshittification can progress in the Open Source Generative AI ecosystem.

We have services at the first stage, like CivitAI, which offers an amazing service to its user-base by tying social media to Stable Diffusion models and fine-tunes, and allowing easy sharing of your work. They have not yet figured out their business plan, which is why, so far, their service appears completely customer-focused.

We have services which started completely free and uncensored for all, and as a result quickly gathered a dedicated following of users without access to GPUs, who used them for free AI generations. They are progressing to the second stage of enshittification: locking NSFW generations behind a paywall, serving adverts, and now also making themselves more valuable to model creators as soon as they smelled blood in the water.

We do not yet have Stable Diffusion services at the late stages of enshittification, as the ecosystem is still far too fresh.

Fascinatingly, the main mistake of the company is not its speed-run through the enshittification process, but rather its attempt to bypass the first step. Unfortunately, it entered the Generative AI game late, as its creator is an NFT-bro who wasn’t smart enough to pivot as early as others did. So to make up for lost time, they are flexing their economic muscles, trying to make their service better for their business partners (including the model creators) and choking their business rivals in the process. A smart plan, if only they hadn’t skipped the first step: making themselves popular by attracting loyal users.

So now the same user-base which is loyal to other services has turned against them, and a massive flood of negative PR is being directed their way at every opportunity. The absence of loyalty built up through amazing customer service is what allowed the community to see the enshittification signs more clearly and turn against them from the start. Maybe the company has enough economic muscle to push through the tsunami of bad PR and manage to pull off step 2 before step 1, but I highly doubt it.

But it’s also interesting to see so many model creators so easily sucked in without realizing what exactly they’re signing up for. The money upfront might be good for an aspiring creator (or not; $150 is way lower than I expected), but if the company succeeds in dominating the market, that deal will eventually turn into a ball and chain, and the same creators who made the service so valuable to its user-base will find themselves having to do things like pay bribes simply to show their models to the same users who already declared they wish to see them.

It’s a trap, and it’s surprising and a bit disheartening to see so many creators sleepwalking into it, when we have ample history to show us this is exactly what will happen. As it has happened in every other instance in the history of the web!

depth2img now available on the Stable Horde!

Through the great work of @ResidentChiefNZ, the Stable Horde now supports depth2img, a new method of doing img2img which better understands the source image you provide; the results are downright amazing. This article, I think, explains it better than I could.

See below the transformation of one of my avatars into a clown, a zombie and an orangutan respectively.

To use depth2img you need to explicitly select the Stable Diffusion 2 Depth model. The rest works the same as img2img.

Warning: depth2img does not support a mask! So if your client allows you to send one, it will simply be ignored.
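To make the two requirements above concrete, here is a minimal sketch of how a client might build a depth2img request. The field names (`models`, `source_image`, `source_processing`) follow the pattern of the horde's img2img payloads but are assumptions here; check the live API documentation before relying on them.

```python
import base64


def build_depth2img_payload(prompt, source_image_bytes, mask=None):
    """Build a generation request that forces the SD2 Depth model.

    depth2img does not support masks, so any mask passed in is
    deliberately dropped, mirroring the horde's own behaviour.
    """
    payload = {
        "prompt": prompt,
        # The depth model must be requested explicitly.
        "models": ["Stable Diffusion 2 Depth"],
        # Source images are sent base64-encoded, like img2img.
        "source_image": base64.b64encode(source_image_bytes).decode(),
        "source_processing": "img2img",  # depth2img rides on the img2img flow
    }
    # Intentionally ignore `mask`: the horde discards it anyway.
    return payload
```

A client can call this with whatever image bytes it has loaded; the returned dict is what would be POSTed to the generation endpoint.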

If you are running a Worker, you can simply update your bridge code, but you must also run update-runtime, as it pulls in quite a few new packages. Afterwards, add the model to your list as usual.

We recently also enabled diffusers models to be loaded into voodoo ray, which allows you to keep not only depth2img in RAM alongside other models, but also the older inpainting model! Please direct all your kudos to @cogentdev for this! I am already running inpainting, depth2img, SD 2.1 and 15 other 1.5 models on my 2070 with no issues!

If you have built your own integration with the Stable Horde, such as clients or bots, please update your tools to take depth2img into account. I would suggest adding a new tab for it which forces Stable Diffusion 2 Depth to be used and prevents sending an image mask, to avoid confusion. This will also give you the opportunity to provide some more information about the differences between img2img and depth2img.
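The suggested tab behaviour could be sketched as a small payload-adjustment step in the client, run just before submission. The tab name and the `source_mask` field are illustrative, not part of any official spec:

```python
def prepare_request(tab, payload):
    """Adjust a generation payload according to the active client tab.

    On a hypothetical 'depth2img' tab, the depth model is forced and
    any mask the user attached is stripped, so the server never sees
    an option it would ignore anyway.
    """
    payload = dict(payload)  # don't mutate the caller's dict
    if tab == "depth2img":
        payload["models"] = ["Stable Diffusion 2 Depth"]
        payload.pop("source_mask", None)
    return payload
```

Doing this client-side keeps the UI honest: users never get the impression that a mask or an alternative model had any effect on a depth2img generation.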

Enjoy and please shower the people behind the new updates with Kudos where you see them!

The Stable Horde is in the news!

A new article has been published about the Stable Horde!

Overall a very well researched article; I can’t find any real issues with it. Personally, I would liken the AI Horde technology to a mix between BitTorrent and Folding@home, though the former has some negative connotations for many people.

Some things I could address from the article:

It’s not entirely clear whether every fork of Stable Diffusion should work, but you can try.

There are no “forks” of Stable Diffusion. There are checkpoints of multiple models, and the horde supports every .ckpt model and some diffusers models. I suspect the author confused Stable Diffusion, the model, with the clients and frontends using it, like automatic1111.

There is a tiny bit of a catch: the kudos system. To prevent abuse of the system, the developer implemented a system where every request “costs” some amount of kudos. Kudos mean nothing except in terms of priority: each request subtracts kudos from your balance, putting you in “debt.” Those with the most debts get placed lowest in the queue. But if there are many clients contributing AI art, that really doesn’t matter, as even users with enormous kudos debts will see their requests fulfilled in seconds.

Indeed, each request consumes kudos to fulfill, but you don’t actually go into debt. While we do record the historical amount of kudos you’ve consumed for statistics, your actual total as a registered user never goes below 0. This means that as a registered user, you will always have more priority than an anonymous user (who typically remains at -50 kudos). Your kudos minimum also allows you to generate at slightly higher resolution and steps than an anonymous user.
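The priority rule described above can be sketched in a few lines. The floor values (0 for registered users, -50 for anonymous ones) come from the text; the queue structure itself is illustrative, not the horde's actual scheduler:

```python
def priority_key(user):
    """Queue priority: more kudos means earlier service.

    Registered users are floored at 0 kudos, anonymous requests sit
    at -50, so a registered user always outranks an anonymous one.
    """
    floor = 0 if user["registered"] else -50
    return max(user["kudos"], floor)


queue = [
    {"name": "anon", "registered": False, "kudos": -50},
    {"name": "heavy_user", "registered": True, "kudos": 0},
    {"name": "contributor", "registered": True, "kudos": 1200},
]

# Serve the highest-priority request first.
ordered = sorted(queue, key=priority_key, reverse=True)
```

Under this scheme a worker who has earned kudos jumps the queue, a heavy registered user who has spent everything still beats anonymous traffic, and nobody is ever locked out, merely delayed.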

Images won’t automatically download, but you can go to the Images tab and then manually download them.

That is totally dependent on the client. It works this way for Artbot, but Lucid Creations, for example, is a local application, so the images are saved with a button click. Other clients might save automatically.

Other than that, great article!

To be honest, I’ve been quite surprised that nobody has written about the SH until now. The SH went live in early September, soon after Stable Diffusion came out, and we’ve generated 13 million images so far (or approximately $50K of value), but none of the big AI and AI Art focused news outlets has given a single mention of it! Now, I am not one for conspiracy theories, but it sounds extraordinarily unlikely that absolutely nobody in the scene has noticed us until now or felt we are newsworthy, especially since many people have tweeted directly to some of the big AI and Stable Diffusion players about it.

Oh well, a PC magazine is the first to report on the Stable Horde. So be it! I wonder how many people will discover the Stable Horde through it.

The Stable Horde: AI image generation for everyone through mutual aid

After completing the KoboldAI Horde and onboarding it into the KoboldAI client, I felt there was a really big opening for doing a similar thing with the open-source AI image generation model, Stable Diffusion. I already had the code for setting up a crowdsourcing cluster, so it shouldn’t take too much refactoring to make the same underlying code work with Stable Diffusion.

The first thing I had to do was figure out what was going to run on the workers. For this, I decided to reuse the stable-diffusion-webui fork by simply adding my bridge code on top of it (as it doesn’t provide a REST API like KoboldAI does). Once I had a valid bridge, it was time to fork the Horde.

And thus, the Stable Horde was born!

It follows the same approach: workers running some version of Stable Diffusion constantly poll for new generations to complete and then send them back to the horde, which hands them over to their final destination. For now the Stable Horde only handles fairly basic text2image generations, but since it’s based on the webui, I can tap into features added upstream much more easily, without having to develop them myself.
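The poll-generate-submit cycle described above is the heart of a worker bridge. Here is a minimal sketch of that loop, with the three HTTP calls replaced by injected stand-in functions (the real bridge talks to the horde's REST endpoints; names and job shape here are illustrative):

```python
import time


def bridge_loop(pop_job, generate, submit, poll_interval=2, max_jobs=None):
    """Minimal sketch of a horde worker bridge.

    pop_job() asks the horde for a pending generation (None if idle),
    generate(job) runs Stable Diffusion locally, and submit(job_id, img)
    returns the finished image to the horde. All three are stand-ins
    for the real HTTP calls.
    """
    done = 0
    while max_jobs is None or done < max_jobs:
        job = pop_job()
        if job is None:
            # Nothing queued: back off briefly, then poll again.
            time.sleep(poll_interval)
            continue
        image = generate(job)
        submit(job["id"], image)
        done += 1
    return done
```

The key design point is that the worker only ever makes outbound requests, so it runs happily behind NAT or a home router with no ports opened, which is what makes volunteer participation so low-friction.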

The code started as a fork of the KoboldAI Horde, but has by now become my primary repo. In fact, with the addition of the second version of the REST API, I have decided to merge both Hordes into a single repository in order to better share code updates (because copying code from one repo to the other was driving me nuts!). This is coming soon, and it means that the Stable Horde will always remain at parity with the KoboldAI Horde from now on.

While there are other free image generation tools out there, I believe none of them is doing anything like what I am attempting. Most are based on providing free Stable Diffusion by eating the costs themselves, with an undefined business plan. When I see that, my suspicions are raised, as a free service like that typically means you’re the product! It also doesn’t help that they are not sharing the code behind them.

Now you might say: “But db0, your service is also free; how come the same criticism doesn’t apply to you?” Which is a great question. The answer is that the Stable Horde is free because it is volunteer-based. That means that, at the end of the day, someone is indeed paying for electricity (primarily myself, at the moment), but the point is that the service is self-managing, through people’s innate drive for mutual aid.

That means that if I get a jump in popularity which exceeds the Horde’s current image generation capacity (and therefore slows things down too much), the belief is that enough people will be annoyed by the speed that they will join their own compute power to the horde, benefiting themselves with higher priority, but also everyone else.

And yes, there is always some amount of “small print”. While the Stable Horde is built on anarchistic principles of mutual aid and direct action, the fact is that we do not control the underlying workers. It is therefore theoretically possible for people to act maliciously on the worker side, which is why I always warn people using the Horde that I cannot guarantee that nobody will see their prompts. So act accordingly.

Nevertheless, one of the things I’m offering is something I just haven’t seen anyone else do for image generation: a fully functioning RESTful API. Its purpose is to further enable image generation for everyone, in new and exciting ways that let tools use this capability without bankrupting their owners over a side hobby whose demand suddenly spiked. People have already started creating some interesting tools, such as a weather app which uses the Stable Horde to generate a dynamic image representing the weather, based on environmental conditions.
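From a tool author's perspective, the API is asynchronous: you submit a request, poll its status, then fetch the results. The sketch below captures that flow with the HTTP calls injected as stand-ins; the endpoint paths follow the pattern of the horde's v2 API but should be verified against the real documentation before use:

```python
def generate_image(post, get, prompt, wait=lambda: None):
    """Sketch of the async request flow a horde client follows.

    post/get stand in for real HTTP calls; `wait` is a back-off hook
    (e.g. time.sleep(2) in a real client).
    """
    # 1. Submit the request and receive a job id.
    job = post("/api/v2/generate/async", {"prompt": prompt})
    # 2. Poll until some worker has fulfilled the job.
    while True:
        status = get("/api/v2/generate/check/" + job["id"])
        if status["done"]:
            break
        wait()
    # 3. Retrieve the finished generations.
    return get("/api/v2/generate/status/" + job["id"])["generations"]
```

Because the heavy lifting happens on volunteer workers, a tool built this way costs its author nothing per image, which is exactly the property that makes side projects like the weather app viable.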

On my end, I am interested in helping game developers figure out ways to implement AI into their games. For this purpose I have already released a Godot add-on which allows you to request AI image generation during a game’s runtime. I have further used this add-on to create my own Stable Diffusion GUI client that can run on any device, without the need for a complicated install procedure or a GPU.

All of this is just scratching the surface of what can be achieved by allowing automation to connect directly to Stable Diffusion (or text generation), and I’m excited to see what people will come up with in the future!