Watching Thought Slime’s latest video, it occurred to me that people who dislike generative AI art don’t even understand those who do enjoy it and find nothing wrong with it. In the video, Thought Slime delivers a pithy comeback to the strawman they erected: “No longer are you gate-kept by the extremely costly fee of…a pencil.” (2:10), they say mockingly.
The truth is that, as the rest of that video explains, there is in fact a very high cost associated with doing manual art well: time.
Doing decent art, even art that is mediocre from the perspective of experienced artists, can easily take years of one’s life, and a lot of people just don’t have that amount of time. Like it or not, the capitalist system we live in is exploiting all of us, and most people will never have the time to follow this passion.
So this mocking strikes me as not only misguided but also extremely privileged. Thought Slime has been fortunate enough to land on the crossroads of skill, talent, time, race and upbringing that let them make a successful career out of making Youtube videos and Twitch streams. They now turn towards all the wage slaves of the world using generative AI and declare: “you know that fun thing you like to do for yourself? Stop doing it unless you’re willing to sacrifice as much as I did”.
“Stop having fun unless you have money, or time”
Sure, people will argue, “but if you like art, then making art should be fun for you!”.
This is nonsense!
Not everyone likes doing art. Some people just like consuming art. Some people don’t have the time to learn to do manual art at all. Some people are neurodivergent or disabled in ways which prevent them from doing it altogether.
People like seeing pretty things. Do you know what they like most of all? Seeing pretty things that came from their own brain. Imperfection of the final result is beside the point. Generative AI allows people to do this quickly and painlessly, with almost any kind of PC they have available.
So some people just like to put something close to what they have in their thoughts onto (digital) paper, show others, and say “this is what I thought of, what do you think?”. There’s nothing wrong with that. There’s no need to go to these people and tell them that they’re bad people unless they have a couple thousand hours or more to spend.
Like I said, if you just like putting your own thoughts onto drawings with minimum fuss, go nuts. There’s literally no harm in it. And if you like making art, then do that. If you like making art but want to do it faster, then combine your skills with generative AI and get the best of both!
The AI Horde has turned one year old. I take a look back at all that’s happened since.
Can you believe that I published the first version of the AI Horde exactly 1 year ago? The first version published was called the KoboldAI Horde, as it was built around the KoboldAI Client specifically, and it had very little traction. But almost as soon as I built it, Stable Diffusion came out; I forked the project to handle SD, and my life was never the same again!
Since the start of its life, the AI Horde has generated ~84M images and ~30M texts, free for everyone! We’ve been making, and will continue making, hundreds of thousands of images and texts per day. We now have literally dozens of third-party UIs and bots integrated directly into the AI Horde.
A quick recap of all that’s happened this year
Sept 2022 – KoboldAI Horde is launched. KoboldAI gets integrated Horde support. Stable Horde is later forked from KoboldAI horde and launched. Lucid Creations is published.
Oct 2022 – We get our first raid. First countermeasures are developed. Artbot is published. Stable UI is published 1 hour later. Our first official discord bot is published. Img2Img is developed. Stable Horde fulfills 1 Million requests.
Nov 2022 – Teams are added. Post-processing is developed, Mastodon bot is launched.
Dec 2022 – Stable Horde reaches its limits and is massively refactored to scale better. Reddit bot is launched. KoboldAI Lite is launched. Stable Horde is in the news for the first time.
Jan 2023 – Image interrogation is developed. First third party mobile app is launched. Worker UI is developed. We start collaborating with LAION to gather Aesthetic ratings. The first third party chrome extension is launched. We start collaborating with Stability.ai. We replace the discord bot with different codebase.
Feb 2023 – Stable Horde and KoboldAI Horde are merged into the single unified AI Horde.
Mar 2023 – AI CSAM filter is developed. First “State of the AI Horde” is published. Ratings receive bespoke countermeasures.
Apr 2023 – AI Horde breaks away from the nataili inference developer due to toxicity.
May 2023 – AI Horde switches from the nataili backend to using comfyUI as a backend. Shared keys are developed. Massive documentation update. Ratings are published onto Huggingface. LoRa support is added.
Jun 2023 – AI Horde starts supporting all LoRas on CivitAI. hordelib receives a DMCA notice from the previous nataili developer.
Jul 2023 – hordelib is recovered from DMCA limbo. SDXL Beta is added on the AI Horde in collaboration with stability.ai. Haidra is announced and we become part of Nivenly.
Aug 2023 – Textual Inversion support is added. AI Horde starts supporting all TIs on CivitAI.
Writing this down just makes me realize how much has been happening constantly! There wasn’t a single month where nothing much happened. The most relaxed stretch was Dec-Jun, during which I spent basically one month in bed sick, but I still managed some big results!
At the time of publishing, the birthday celebration event is still ongoing in the AI Horde discord, where I’m staying online in a voice chat to answer questions or just chat with people. We have plenty of threads for generating and sharing art etc.
I also asked the AI Horde community to tell me what the AI Horde means to them, and I wanted to quote some of them here:
The AI Horde’s (completely free and open) Image and Text generation services rely on volunteers donating their computing resources, and those volunteers do so without profit motive or selfish reasons. The result is that a huge number of people – tens of thousands, at least – have been able to get access to a technology that may otherwise be out of their reach. I find this to be a hugely inspiring display of altruism and goodwill on the part of everyone involved, and I am deeply humbled and honored to be part of that ecosystem of people.
Tazlin – AI Horde backend developer
AI Horde took KoboldAI from personal software to an online platform and allowed us to build a version people don’t have to install. Now it’s not just used for personal use, but also as a testing platform by model developers for their latest models before public releases.
Henky – KoboldAI Lead
Thanks to Stable Horde, I can generate new storyboard images for existing film concepts making the process of hashing out scenes and sequences for film projects easier. I’m thankful for everyone involved, the contributors and software developers for the tools, and to db0 for making all of this possible.
I think horde is a great way to make AI accessible to the general public without the need for complicated installs or expensive hardware. It’s certainly a great and low-friction way to get people introduced to generative AI without the limitations and restrictions of closed-off/corporatized AI services like chatgpt… it’s been a vital backbone of the FOSS AI ecosystem over the past year.
Concedo – KoboldAI Lite developer
AI Horde is a powerful tool for breaking the monopoly of large corporations and providing AIGC with an independent platform to avoid the risk of being controlled by these corporations. This platform offers more affordable, free, and diverse AI services. Many AI drawing service providers now operate their own GPU cloud services and charge users high fees while controlling, castrating, and modifying AI drawing models, which severely infringes on user rights. The emergence of AI Horde allows AIGC to retain all potential while providing new ideas and solutions for expanding AI application scenarios, making it easier for ordinary people to access various AI technologies.
Pawkygame – AIpainter developer
For me, the whole community that’s been built around the Horde highlights some of the best about free and open source software. There are so many enthusiastic and knowledgeable people who are willing to help each other out or discuss the latest in the world of generative AI. It’s been an awesome community to be a part of and one of my favorite little corners of the internet.
Rockbandit – Artbot developer
Stable Horde is an excellent tool for generating images, useful for creating reference material for traditional or digital art, creating art itself through the platform, or using it as a starting point to learn about AI. And the fact that it’s a free platform for image generation is a significant blow to companies that seek to monetize AI access.
It’s great for those who lack artistic skills like myself, and more skilled artists can use it for inspiration, or take a hybrid approach by making something manually and using image to image generation.
The various distributed workers available mean that the barrier to entry for joining the horde is a device with an Internet connection, and I firmly believe the collaborative nature of the Horde would make Tim Berners-Lee proud.
It’s an excellent first step towards a bright future, full of opportunity made possible by free and open access to advanced technology. Now we just have to hope that the rest of humankind doesn’t trip.
The AI Horde is the best way to burn our worker GPUs for others’ happiness
While I was planning to re-use much of the code that had to do with the automatic downloading of LoRas, this quickly ran into unexpected problems in the form of pickles.
In Python, a pickle is effectively an object in memory stored to disk as-is. The problem is that pickles are terribly insecure: code embedded in a pickle will be executed as soon as you load it back into RAM, which is a problem when you get a file from a stranger. There is a solution for that, safetensors, which ensures that only the data is loaded and nothing harmful.
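To make the danger concrete, here is a minimal, deliberately harmless sketch of how a pickle can smuggle code execution. The `Malicious` class and the `eval` payload are illustrative stand-ins for what a real attacker would do with something like `os.system`:

```python
import pickle

class Malicious:
    """Toy payload: unpickling an instance executes attacker-chosen code."""
    def __reduce__(self):
        # pickle serializes this object as "call eval('6*7') on load".
        # A real attacker would return (os.system, ("...",)) instead.
        return (eval, ("6*7",))

payload = pickle.dumps(Malicious())  # looks like innocent bytes on disk
result = pickle.loads(payload)       # attacker code runs right here
print(result)                        # 42
```

This is exactly why loading a stranger's `.pt` or `.ckpt` file is equivalent to running their program.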
However, while most LoRas are a recent development and were released as safetensors from the start, textual inversions (TIs) were developed much earlier, and most of them are still out there as pickles.
This caused a big problem for us: we wanted to blanket-support all TIs from CivitAI, but that opened the gate for someone to upload a malicious TI to CivitAI, then request it themselves through the horde and pwn all our workers in one stroke! Even though CivitAI technically scans uploaded pickles, automated scans are never perfect. All it would take is for someone to discover one way to sneak an exploit past these scans. The risk was way too high for our tastes.
But if I were to allow only safetensors, only a small minority of TIs would be available, making the feature worthless to develop. The most popular TIs were all still pickles.
So that meant I had to find a way to automatically convert pickles into safetensors before the worker could use them. I couldn’t do it on the worker side, as the pickle has to be loaded first. We had to do it in a secure location of some sort. So I built a whole new microservice: the AI Hordeling.
All the Hordeling does is provide a REST API to which a worker can send a CivitAI ID to download. The Hordeling checks whether the file is a safetensors file and, if not, downloads it, converts it to safetensors, and then gives the worker a download link to the converted file.
This means that if someone were to get through the CivitAI scans, all they would be able to exploit is the Hordeling itself, which is not connected to the AI Horde in any way and can be rebuilt from scratch very easily. Likewise, the workers ensure that they only ever download safetensors files, which guarantees they can’t be exploited this way.
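As a sketch of the worker-side half of this arrangement (the function names and the Hordeling endpoint shape are my illustrative assumptions, not the actual worker code), the only decision the worker makes locally is whether a file is already a safetensors file; anything else gets routed through the Hordeling:

```python
def needs_hordeling_conversion(filename: str) -> bool:
    """Anything that isn't already a .safetensors file (e.g. .pt, .bin,
    .ckpt pickles) must go through the Hordeling for conversion before
    a worker is allowed to touch it."""
    return not filename.lower().endswith(".safetensors")

def safe_download_url(civitai_id: int, filename: str) -> str:
    # Hypothetical endpoint shape: either way, the worker only ever
    # receives a link to a safetensors file.
    if needs_hordeling_conversion(filename):
        return f"https://hordeling.example.com/api/v1/convert/{civitai_id}"
    return f"https://civitai.com/api/download/models/{civitai_id}"
```

The key property is that no code path ever hands a worker a raw pickle.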
All this to say, it’s been a lot more work than expected to set up Textual Inversions on the horde! But I did it!
So I’m excited to announce that all textual inversions on CivitAI are now available through the AI Horde!
The way to use them is very similar to LoRas: you have to specify the ID, or a unique part of the name, in the “tis” field so that the worker can find them. The tricky part is that TIs require their filename in the prompt, and the location of the TI in the prompt matters. This complicates matters because the filename is not easy for the user to figure out, especially since some model names have non-Latin characters, and one can’t know how those will be changed when saved to disk.
So the way we handle it instead is that one needs to put the CivitAI model ID in the prompt, in the form of “embedding:12345”. If the strength needs to be modified, it should be written as “(embedding:12345:0.5)”. On the worker side, TIs are always saved using their model ID, which allows ComfyUI to know what to use.
I also understand this can be quite a bother for both users and UX developers, so another option exists where you allow the AI Horde to inject the relevant strings into the prompt for you. You can specify in the “tis” key that you want the prompt to be injected, where, and with how much strength.
It will then be injected at the start of the prompt, or at the end of the negative prompt, with the corresponding strength (defaulting to 1.0). Of course you might not always want that, in case you want the TI to be placed in a specific part of the prompt, but you at least have the option to do it quickly this way when placement is not important. I expect UX designers will want to let users handle it both ways as needed.
You are also limited to a maximum of 20 TIs per request, and there’s an extra kudos cost if you request any number of TIs.
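Putting the pieces together, a request payload might look roughly like this. This is a sketch based on the description above; the exact field names (beyond “tis” and the embedding prompt syntax) are approximations, so check the API reference before relying on them:

```python
# Option 1: reference the TI manually inside the prompt by CivitAI model ID.
manual_payload = {
    "prompt": "a portrait in the style of (embedding:12345:0.5), oil on canvas",
    "params": {"n": 1},
    "tis": [{"name": "12345"}],  # tells the worker which TI file to fetch
}

# Option 2: let the AI Horde inject the string into the prompt for you.
injected_payload = {
    # "###" separates the prompt from the negative prompt.
    "prompt": "a portrait, oil on canvas ### blurry, low quality",
    "params": {"n": 1},
    "tis": [
        # inject_ti picks where the string lands; strength defaults to 1.0.
        {"name": "12345", "inject_ti": "negprompt", "strength": 0.7},
    ],
}

assert len(injected_payload["tis"]) <= 20  # hard cap of 20 TIs per request
```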
So there you have it. Now the AI Horde provides access to thousands of Textual Inversions along with the thousands of LoRa we were providing until now.
Maximum customization power without even a need for a GPU, and we’re not even finished!
With the recent updates to the Fediseer, it is now possible to use it more efficiently for programmatically adjusting one’s blocklist using the built-in censures, so I wanted to add this capability to the Divisions by zero by updating my update_blacklist.py script to utilize this method.
While I could have done this with simple HTTP requests, I thought there were enough features there that creating a Python SDK package around the fediseer API would be worthwhile. So I’m excited to announce the release of pythonseer!
It is built very similarly to Pythörhead, so if you’re already using that in your scripts, it should be trivial to add support for the Fediseer via pythonseer as well.
I have already updated the provided script for automatically maintaining your blocklists to make use of the fediseer censures as well. Simply censure the domains you would like to defederate from, and the script will automatically add them to your list along with the suspicious instances. If you have the script running on a schedule, you can then simply add new censures via the fediseer API and your blocklist will be automatically kept up to date.
Likewise, if you don’t want to maintain the whole blocklist yourself, find other like-minded instances and combine their censures with yours. This way, whenever they censure someone, you also benefit.
Enterprising fediverse admins could also do other fancy things with pythonseer, such as creating polls to democratically add instances to their endorsements or censures, etc. So do let me know if you come up with new ideas!
One can now request all endorsements from multiple domains:
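For example, using only the standard library (the endpoint path here mirrors the documented `/api/v1/censures_given` pattern; treat the exact name as an assumption and check the fediseer API docs):

```python
import json
import urllib.request

def endorsements_given_url(domains: list[str]) -> str:
    # Comma-separated list of domains, same pattern as censures_given.
    return "https://fediseer.com/api/v1/endorsements_given/" + ",".join(domains)

def fetch_endorsements(domains: list[str]) -> dict:
    """Fetch the combined endorsements given by several instances."""
    with urllib.request.urlopen(endorsements_given_url(domains)) as resp:
        return json.load(resp)
```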
This allows instances to quickly discover a “list of friends” based on other instances. Use cases for this might include scripts which auto-approve comments in moderation, or automatically update a fediverse instance’s whitelist based on common endorsements.
The current fediseer guarantees are meant to apply only to considerations of spam. As such, we did not have a way to mark instances that many would consider terrible in all ways other than spam (e.g. a pro-nazi instance, or an instance allowing loli content).
To solve this a new feature has been added: Censures
An instance can now apply a censure to any other instance domain (whether it’s guaranteed already or not), for any reason. This extreme disapproval can stem from any subjective reason, but like endorsements, it doesn’t by itself have any mechanical impact.
In fact, because endorsements and censures are explicitly subjective, I have decided not to display censure counts in instance details, to avoid people using them to “objectively” rate instances, which is not their intended purpose. One’s instance could easily be censured by a lot of, say, fascist instances, but this should have no impact on how non-fascists perceive that instance.
Instead, one can see combined censure lists from multiple instances, like one can see combined endorsements. You simply pass a comma-separated list of domains to the /api/v1/censures_given endpoint and you get a list of all the instances they have censured collectively.
This endpoint can then be consumed to make collective blocklists among instances that trust each other, without one having to manually discover and parse different instance federation blocklists all the time.
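A minimal sketch of consuming that endpoint to build a collective blocklist. The response shape assumed here (a list of objects with a "domain" key under "instances") is my assumption; adjust it to the actual API schema:

```python
def combined_blocklist(censure_responses: list[dict]) -> list[str]:
    """Merge censure lists from several trusted instances into one
    deduplicated, sorted defederation list."""
    blocked = set()
    for response in censure_responses:
        for instance in response.get("instances", []):
            blocked.add(instance["domain"])
    return sorted(blocked)

# Example with two instances' censures overlapping on one domain:
merged = combined_blocklist([
    {"instances": [{"domain": "spam.example"}, {"domain": "bad.example"}]},
    {"instances": [{"domain": "bad.example"}]},
])
print(merged)  # ['bad.example', 'spam.example']
```

The deduplicated result can then be written straight into the instance's federation blocklist, or merely flagged for review as described below.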
Likewise, by not being explicitly tied to a blocklist, the censure list could also be used to enforce a softer approach. For example by having an auto-moderator script which flags comments from censured instances for manual review, etc. This could allow an instance to retain a less-restrictive blocklist, but still allow for more rapid response to users coming from potentially problematic instances.
As always, the point of the fediseer is to provide the information easily, rather than to dictate how to use it. I’m excited to see in which ways people will utilize the new capabilities.
A while ago I was having a lot of trouble trying to figure out how to make open source development work. It was a completely new endeavour for me and I kept running into payment processor issues. In desperation, I wrote about it on Hachyderm, which is my primary mastodon instance, and someone suggested I contact Nivenly, an org created explicitly to provide guidance on such matters for open source software, which is also based on Hachyderm.
While discussions with them were ongoing, work was progressing on the AI Horde itself, but I eventually realized that I couldn’t keep everything under the AI Horde moniker. The AI Horde itself is the main middleware making it all possible, but nobody can use it in isolation. We need not only the worker backends, but also the various frontends and bots which end-users will use.
So I set up a Github org to provide an overarching group for the whole official ecosystem around the AI Horde. I called it Haidra, as it both works as a portmanteau of the “AI” part and evokes the hydra, representing the endless potential of the integrations around the AI Horde. It is also one of my favourite concepts.
Long story short, Haidra now hosts all the repositories that the public AI Horde needs to run, along with a lot of helper libraries and UIs. Its members are all the people who have provided invaluable support in development or community building. We also designed a neat logo for it, using the AI Horde and a competition in our discord server, which Tazlin further polished into our new logo. You can see it as the featured image of this post.
While there’s a lot of people helping to make Haidra what it is, most of the community building still falls on me, along with the social outreach. Likewise, there’s a lot of questions and problems around the governance of an expanding community, which is really not my strength. This is where Nivenly comes in.
The plan is that they will help us with best practices for growing and sustaining our community and finding more volunteers, especially in community management and software development. They will also provide us with governance and legal support as needed, and generally plug any holes which a lot of developers (including me) don’t have the skills to deal with.
I hope that with the backing of the Nivenly Foundation, we can all together take Haidra and the AI Horde to the next level and ensure that generative AI remains available to everyone, forever!
The thing is, network effects are a double-edged sword. People join a service to be with the people they care about. But when the people they care about start to leave, everyone rushes for the exits. Here’s danah boyd, describing the last days of Myspace:
If a central node in a network disappeared and went somewhere else (like from MySpace to Facebook), that person could pull some portion of their connections with them to a new site. However, if the accounts on the site that drew emotional intensity stopped doing so, people stopped engaging as much. Watching Friendster come undone, I started to think that the fading of emotionally sticky nodes was even more problematic than the disappearance of segments of the graph.
With MySpace, I was trying to identify the point where I thought the site was going to unravel. When I started seeing the disappearance of emotionally sticky nodes, I reached out to members of the MySpace team to share my concerns and they told me that their numbers looked fine. Active uniques were high, the amount of time people spent on the site was continuing to grow, and new accounts were being created at a rate faster than accounts were being closed. I shook my head; I didn’t think that was enough. A few months later, the site started to unravel.
This is exactly what is happening to Reddit currently. The most passionate contributors, the most tech-literate users, and the integrators who made all the free tools in the ecosystem around reddit which make that service so much more valuable have left and will never look back.
From the dashboards of u/spez, however, things might look great. Better, even! The drama around their decision-making certainly caused a lot more posts and interactions, and the loss of the 3rd party apps drove at least a few users to the official applications.
But this is an illusion. Like MySpace before them, the metrics might look good, but the soul of the site has been lost. It’s not easy to explain, but since I’ve started using Lemmy full-time, I’ve seen the improvement in engagement and quality in real time. Half a month ago, posts could barely pass 2 digits; now they regularly break 3 and sometimes 4 digits. And the quality of the discussions is a pleasure to go through.
I’ve said it before: reddit was never a particularly good site. Its saving grace was the openness of its API and its hands-off approach to communities, the two things it just destroyed. It’s those 3rd party tools and communities that made reddit what it is. And as the ecosystem around reddit sputters and dies, the one around the Threadiverse is progressing at an astonishing rate.
Not only are the integrators coming from reddit aware of what kinds of bots and tools are going to be very useful, but a lot of those tools have been shut off from reddit and switched to the lemmy API instead, explicitly cannibalizing the quality of the reddit experience. And due to the completely open API of the Threadiverse, those tools now get unparalleled access and power.
Sure, if you visit reddit currently, you’ll see people talking and voting, but as someone who’s been there from the start, the quality has fallen off a cliff and is reaching terminal velocity. But it feels like one’s still flying!
It’s not just the quality of the posts, where only the most superficial meme stuff can rise to the top, and not just the quality of the discussion; even the mere vibe of the discussions is lost.
There’s now significant bitterness and hostility, especially as the mods who were responsible for maintaining the quality have left, are being hands-off, or just don’t have the tools needed to keep up. I’ve heard from multiple people who are leaving even though they were not originally planning to, because the people left on reddit are just so toxic.
This is a very vicious cycle which will accelerate the demise of that site even further.
A house fire can go from a spark to a raging inferno in less than a minute. The flames consuming reddit are just now climbing up the curtains and it still appears manageable, but it’s already too late. Reddit has reached terminal enshittification and the only thing left for it to do, is die.
In the past month or so, I’ve been collaborating with stability.ai to help them improve the quality of the new upcoming SDXL model, by providing expertise around automation and efficiency in quickly testing new iterations of their model using the AI Horde as middleware.
Today I’m proud to announce that we have arranged with stability.ai for the new SDXL beta to become available on the AI Horde directly, in a form that will be used to gather data for further improvements of the model!
You will notice that a new model is available on the AI Horde: SDXL_beta::stability.ai#6901. This is SDXL running on compute from stability.ai directly. Anyone with an account on the AI Horde can now opt to use this model! However, it works a bit differently than usual.
First of all, this model will always return 2 images, regardless of how many you requested. The reason is that each image is created using a different inference pipeline, and we want you to tell us which you think is best! As a result, the images you create with this model are always shared! This means that the images you generate will be stored and then used for further improvements based on the ratings you provide.
Which ratings? Each time you generate images with SDXL, you can immediately rate them for a hefty kudos rebate! Assuming your client supports it, you should always report back which of the two images is better. You can then optionally also rate each of them for aesthetics (how much you subjectively like them) and for artifacts (how ruined the image is by generation imperfections, like extra fingers etc.).
For best results with SDXL, you should generate images of 1024×1024 as that is its native resolution. The minimum allowed resolution for SDXL has therefore been adjusted to this.
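As a request sketch for trying the beta (the `/api/v2/generate/async` endpoint is part of the AI Horde API, but treat the exact field set shown here as an approximation and consult the API docs):

```python
payload = {
    "prompt": "a lighthouse at dusk, dramatic sky",
    "models": ["SDXL_beta::stability.ai#6901"],
    "shared": True,  # beta images are always shared for rating purposes
    "params": {
        "width": 1024,   # SDXL's native resolution
        "height": 1024,
        "n": 1,          # the beta returns 2 images regardless
    },
}
# POST this to https://aihorde.net/api/v2/generate/async with your
# registered (non-anonymous) API key in the "apikey" header.
```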
Also, as mentioned before, this beta is primarily going to be used to improve the model, so we’re disallowing anonymous usage of this model for the moment. However, registering an account is trivial and free, and will also give you more priority for your generations!
The coolest part is that as stability.ai further improves the model, it will automatically become part of this test, without you having to do anything! As such, you will eventually start getting better and better generations from the SDXL_beta on the AI Horde. This is why your continued rating of the quality is very important!
So please go forth and use the new model and let us know how it performs!
It’s high time I wrote one more of these posts to keep everyone up to date. It’s been a generally fairly slow month as far as the Horde is concerned. That’s not to say that we produced less content, but rather that there hasn’t been a lot of progress on features, as all the developers appear to have been busy with other projects.
(In case you’re new here and do not know what the AI Horde is: it is a crowdsourced free/libre software Open API which connects Generative AI clients to volunteer workers who provide the inference.)
LoRas in the mix
Since the last State of the AI Horde, we saw the introduction of LoRas into payloads, allowing everyone access to all LoRas on CivitAI at the click of a button. I have likewise been recording the number of times each LoRa is used, storing it by its CivitAI ID. Below you can see the top 25 LoRas used since we started recording them. You can check which one it is by adding the number to this URL: https://civitai.com/models/<LORA ID>. Using this method, we can see that the Add More Details LoRa is clearly the most used one, followed closely by Details Tweaker. People do love adding more details!
All in all, a total of 801,326 LoRa uses have been successfully recorded on the AI Horde!
In image news, our usage remains fairly stable, which is fairly impressive if one considers just how much extra slowdown is added by all these LoRas. We are stable at ~300K images generated per day. Worth noting that since it started, the AI Horde has generated close to 1 whole PETApixelstep for free, and ~70 million images!
On the model side, Deliberate solidifies itself further as the best generalist model, while Stable Diffusion drops to 3rd place as the anime-focused Anything Diffusion takes the 2nd spot. Our own special Kiwi ResidenChief’s model seems to have been a massive success as well, coming out of nowhere to grab a solid 4th place. And this time the Furries are also in force, capturing the 6th position! Pretty cool stuff!
Deliberate 31.6% (3127771)
Anything Diffusion 11.4% (1133058)
stable_diffusion 9.6% (951705)
ICBINP – I Can’t Believe It’s Not Photography 6.7% (662205)
Dreamshaper 5.6% (550335)
BB95 Furry Mix 3.7% (370853)
Hentai Diffusion 3.1% (303351)
Counterfeit 2.6% (254197)
ChilloutMix 1.9% (186525)
Pretty 2.5D 1.8% (178713)
Text Generation Stats
On the text side, not much has changed since May, with our generation staying similar at ~3M requests fulfilled and 377 Megatokens.
Likewise, in the model top 10, Pygmalion 6B is still leading the pack, with Mr. Seeker’s Erebus and Nerybus still in heavy use.
The main reason for being otherwise busy is that I’ve been furiously transferring my Reddit presence to my own self-hosted Lemmy instance, because Reddit is speed-running enshittification. I won’t bore you with the details, but you can read about some of my work and see some of my development in the relevant blog tag.
However, I do want to say that the instance I’ve fired up, the Divisions by zero, has been more successful than I could have ever imagined! It has ~10K registered users, thousands of subscribers to its communities, and some of the best admin teams I could hope for.
Unfortunately there’s also been reddit drama which has been mightily distracting to me, but things are slowly settling down and I am putting my reddit days well behind me.
In relation to the AI Horde, however, we do have some cool communities you should subscribe to.
I am likewise already planning more events and automation to more closely tie the AI Horde into the Lemmy instance for cool art stuff! Stay tuned and/or throw me your ideas!
R from our mod team has started running some cool prompt challenges in the discord server which you are all more than welcome to join! Winner gets a nice bundle of kudos, not to mention the amount you get by simply posting. It’s just fun all around, and the winners are featured in the Lemmy communities as well!
Tazlin has been hard at work improving the AI Horde Worker with bugfixes (not to mention the huge amounts of tech support given in discord). The AI Horde Worker has as a result become much, much more stable, which should have a good impact on your kudos-per-hour! Just a quick shout-out to an invaluable collaborator!
A Lot more workers
I don’t know how it’s happening but the AI Horde is nearing 100 dreamers! I am getting the fireworks ready for the first time we hit this threshold!
Funding and Support
My current infrastructure has been sufficiently stable since the last migration to a dedicated host, which I think you have experienced as a low amount of downtimes and interruptions since.
This is my usual opportunity to point out that running, improving and maintaining the AI Horde is basically a full-time job for me, so if you see the open commons potential of this service, please consider subscribing as my patron, where I reward people with monthly priority for their support.
While development slowed significantly in June, we’re still doing significant work for the open commons. I have just not had the mental capacity to build up hype as much as I used to, and to make it worse, the social media landscape is completely up in the air at the moment.
I am really hoping more people can step up and help promote the AI Horde and what it represents as my workload is just through the roof and to be perfectly honest, I am at the limit of my “plate-spinning” capabilities.
Please talk about the AI Horde and the tools in its ecosystem. The more people who know about it, the more valuable it becomes for the benefit of everyone!
We have plenty of ways one can help, and we shower everyone who does so with kudos: from people sharing images and helping others in the community, to developers bug-fixing my terrible code, to community managers on discord and admins on lemmy. If you want to help out, let us know!
Just one day ago I published the initial release of the Overseer, and I was annoyed by the implementation almost immediately. Requiring people to register on another Lemmy instance with a custom username, wait for manual approval (which could also let someone snipe that username, forcing me to manually delete it), and THEN register their instance on the Overseer was just too clunky.
While a few people did register, I realized almost immediately that it would very likely never take off. So I started rethinking how I could streamline the process so that it would require far fewer steps.
My initial plan was to simply register all available instances on the fediverse by default, and allow admins to claim them later. That would require me to somehow contact those admins directly. This led me to investigate the best way to do that, which is when I discovered I could theoretically talk to any fediverse instance directly, without having to rely on a specific Lemmy instance with all its limitations.
So I spent many, many hours figuring out how to do that. The documentation is frustratingly sparse and incomplete on this point, with my best guide being a blogpost from half a decade ago, written for a language other than Python. Fortunately, in this day and age I had access to ChatGPT, so I could ask it to translate the code on that page to Python, and then, with some trial and error and digging in the official documentation, I had a working DM system for Mastodon!
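To give a flavour of what “a DM” means at the ActivityPub level: Mastodon treats a Note that is addressed only to one recipient (no public or followers collections in the addressing) as a direct message. Below is a minimal sketch of such a payload, built from the ActivityStreams vocabulary; all ids and URLs are hypothetical placeholders, and this is not the Overseer’s actual code. The finished payload still has to be POSTed to the recipient’s inbox with a valid HTTP Signature, which is where most of the pain lives.

```python
from datetime import datetime, timezone

def build_dm_activity(sender: str, recipient: str, text: str) -> dict:
    """Build a Create activity wrapping a Note addressed to a single actor.

    Because 'to' contains only the recipient (and no 'Public' or
    followers collection), Mastodon renders this as a direct message.
    All ids below are hypothetical placeholders.
    """
    now = datetime.now(timezone.utc).isoformat()
    note = {
        "type": "Note",
        "id": f"{sender}/notes/example-1",  # hypothetical object id
        "attributedTo": sender,
        "content": text,
        "to": [recipient],  # only the recipient: direct visibility
        "tag": [{"type": "Mention", "href": recipient}],
        "published": now,
    }
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "id": f"{sender}/activities/example-1",  # hypothetical activity id
        "actor": sender,
        "to": [recipient],
        "object": note,
        "published": now,
    }
```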
Then I set out to do the same for Lemmy, which is where the big frustration was waiting: the documentation is practically non-existent, the error messages are cryptic, and Lemmy is written in Rust in a way that was completely impenetrable to me. After lots and lots of digging in the code, trial and error, and asking around in desperation, I managed to figure out that the main blocking point was the “type” key of my ActivityPub actor, not a fault in my payload.
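For context, here is a minimal sketch of an actor document showing where that “type” key lives. The field values are illustrative assumptions (not the Overseer’s actual actor): the point is simply that the remote server inspects “type” to decide how to treat the actor, so an unexpected value there can block federation even when every payload you send is well-formed.

```python
def build_actor(domain: str) -> dict:
    """Minimal ActivityPub actor document (illustrative values only).

    The 'type' field is what a remote server like Lemmy inspects to
    classify the actor ('Person', 'Service', 'Application', ...).
    """
    base = f"https://{domain}/actor"  # hypothetical actor URL
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": base,
        "type": "Person",  # the key that caused my blocking point
        "preferredUsername": "overseer",
        "inbox": f"{base}/inbox",
        "outbox": f"{base}/outbox",
    }
```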
Unfortunately, figuring out how to “speak ActivityPub” took me the better part of a day. But the good news is that once I had that down, the rest was just a matter of time; I just had to refactor everything. And hell, while I was doing that, why not change the name and domain as well?
I have already updated my previous post with the new workflow, but the big difference is that it completely removes the need for an extra Lemmy instance one has to manually register on. Instead, one just has to “claim” their instance and they will get a PM directly in their mailbox!
Likewise, you can guarantee a different instance even if its admins haven’t claimed it first, and they will get a helpful PM informing them about it. If the instance doesn’t exist in the whitelist yet, all you need to do is search for it, and it will be automatically added and can then be guaranteed. My next step is to automatically import all known instances by pulling them from the federation, which should remove the manual step of searching for them first.
So let’s see how it goes. The ability to talk to fediverse instances directly also opens up some really fascinating doors for automation! Stay tuned!