The playground schematic analogy for designing a fediverse service.

In recent days, the discussion around Lemmy has become a bit…spicy. There are a few points of impact here. To list some examples:

This is not an exhaustive list. There’s significant grumbling about lemmy under the mastodon hashtag too.

On the flipside, there's also been positive reinforcement towards lemmy and its dev, as can be seen from the admin of lemm.ee and many others in the lemmy ecosystem in that thread.

You'll notice all of these are frustrations about the lack of sufficient moderation capabilities in lemmy's tool-set. This typically comes from a lemmy admin's perspective and the things that are very important for protecting themselves and their communities.

In the discussions around these issues, a few common arguments have been made which, while sounding reasonable at face value, I think are the wrong thing to say in the situation at hand. The problem is that the one making these arguments feels like they're being more than fair, while the ones receiving them feel dismissed or disrespected.

Before I go on, I want to make clear that I am writing this from a place of support. I had been supporting lemmy for years before the big lemmy exodus, and after I made my own lemmy instance, I created dozens of third-party tools to help the ecosystem, because I want lemmy to succeed. That is to say, I'm not a random hater. I am just dismayed that the community is splintering like this over what seems to me to be primarily a communication issue.

So, one of the analogies made in the sunaurus thread likened lemmy development to designing a playground. It strikes me that this analogy is perfect, but not for the reason the one making it expects. Rather, it is perfect for exemplifying how someone who wholeheartedly supports FOSS developers might still misjudge the situation and escalate it through miscommunication.

In this analogy, the commenter likened lemmy development to building a playground, with external people asking for some completely unrelated feature, like a bird-watching tower, and expecting the developers to give it priority. The problem here is that the analogy is flawed. The developers are not building a playground for themselves. They're building a playground schematic, which they expect people would and should deploy in many other locations.

Some people might indeed ask for “bird-watching observation posts” in such a schematic, and it would be more than fair to ask them to build those themselves. But it is fallacious to treat any and all requests as something as out of scope as that. Some people might request safety features for the playground, and those should absolutely be given more priority. We already know what can happen if you design a shitty playground, even if you give it away for free!

To extend this analogy, the other lemmy admins are not asking for luxury features. They are asking for improvements to the safety of the playground. Some people point out that metal slides become dangerous depending on the weather. Others point out that the playgrounds might be built in very unsafe areas, so a fence to protect the children from predators should be mandatory.

And here is where the disconnect in communication happens. The overworked developer is already busy designing the next slide which can get them paid, or making sure things don't break down as fast, etc., and they perceive the safety requests as “luxury” items that someone should deal with themselves. However, for the people who have to deal with upset parents and missing children, this dismissive attitude comes across as downright malicious.

And thus you have a situation where both sides see the other as unreasonable. The devs see the people asking for the safety features as entitled, while the people who are suffering through the lack of those safety features perceive the developers as out-of-touch and dismissive.

Leave such a situation to fester long enough, and you start to get the exact situation that we have now: the Lemmy software is starting to get a bad reputation among those most concerned about safety, while forks and rebuilds are popping up.

All of this hurts the whole FOSS ecosystem by splintering development effort into multiple projects instead of collaborating on a single one. It turns our strength into a weakness!

This also brings me to another argument I see lemmy devs making somewhat too often: that they don't have anything to gain from a larger community and just get more headaches. I always felt this was a patently absurd statement!

The lemmy devs are making more than 3K a month from lemmy. Enough that they say they're working on lemmy full-time. These funds don't come because they're running or developing a single forum for themselves. They come because they provide the “playground schematic”. If the community splinters into software other than lemmy, naturally the funds going towards lemmy development will likewise dry up.

This statement is completely upside down. The more people there are using and hosting lemmy, the more the lemmy developers benefit.

I would argue that the people lemmy developers should be listening to most are exactly the people hosting lemmy instances. These are the people putting incalculable hours into running and maintaining the servers, often paying out of pocket every month to provide a service to others. Each admin is basically free value to the lemmy developers.

My position, in fact, is that this scales down in layers. Lemmy devs need to listen to instance admins most. Instance admins need to listen to community mods most. And finally, community mods should listen to their users most. In this way you create a bottom-up feedback mechanism that doesn't easily overwhelm any single person, and everyone has a chance to be heard.

In the AI Horde, I follow a similar approach. The segment of my community I listen to the most is in fact not the ones who are giving me money. It's the ones who are providing their free time and idle compute for no other benefit than their own internal drive: the workers. They are effectively using their time for the benefit of the whole ecosystem, which indirectly benefits me most. Without the workers, there would practically be no AI Horde, even if I were the only one remaining. Likewise, without lemmy admins, lemmy as a piece of software would be dead, even if the lemmy.ml people kept hosting forever.

So what can be done here? I think an important aspect is making sure we are talking on the same wavelength. Cutting down on miscommunication is very important to avoid exacerbating an already precarious situation.

Secondly, it's totally understandable that the lemmy devs don't have enough time for everything. But likewise, there's a ton of people who need safety features and can't get them. As such, in my opinion, priority should be put into building frameworks that allow people to more easily extend lemmy's functionality, even when it doesn't match the lemmy developers' vision or immediate roadmap.

For this reason, I strongly suggest that effort should be put into developing a plugin framework for lemmy. It's ironic to suggest a completely different feature when the problem is too many things to do, but this specific feature is meant to empower the larger community to solve their own problems more easily. So in the long term, it will massively reduce the demands coming in to the lemmy developers.

In the meantime, I do urge people to always consider that there's a human behind the monitor on the other side. A lot of the time, people don't have the skills to effectively communicate what they mean, which is even worse in text form. We all need to be a bit more charitable about what the other side is trying to say, especially when we're trying to collaborate on a common FOSS project.

Post-Mortem: The massive lemmy.world -> lemmy.dbzer0.com federation delays.

A couple of days ago, someone posted on /0 (the meta community for the Divisions by zero) that the incoming federation from lemmy.world (the largest lemmy instance by an order of magnitude) was malfunctioning. Alarmed, I started digging in, since a federation problem with lemmy.world would massively affect the content my community can see.

As always, my first stop was the Lemmy General Chat on Matrix, where I asked the lemmy.world admins if this appeared to be something on their end. To their credit, both their lead infra admin and the owner himself jumped in to assist me, changing their sync settings, adding custom DNS entries and so on. Nothing seemed to help.

But the problem must still be somewhere in lemmy.world, I thought. It's the only instance where this is happening, and they upgraded to 0.19.3 recently, so something must have broken. But wait, this didn't start immediately after the upgrade. Someone pointed out this very useful federation status page, which kinda pointed to the problem being only with lemmy.world.

Not quite: other big instances like lemmy.ml and lemm.ee were not having any issues federating with lemmy.world (even though two dozen others like lemmy.pt were), and they are as big as, if not bigger than, lemmy.dbzer0.com. A problem originating from lemmy.world couldn't possibly be affecting only some specific instances. To make matters worse, both lemmy.ml and I are using the same host (OVH), so I couldn't even blame my hosting provider somehow.

So obviously the main culprit is somewhere in my backend, right? Well, maybe. Problem is, none of the components of my infrastructure were overloaded; everything was sitting between 5-15% utilization. Nothing to even worry about.

OK, so first I needed to make sure it wasn't somehow a network issue specifically between me and lemmy.world. I know OVH gave me a bum floating IP in the past and was completely useless at even understanding that its floating IP was faulty, so I had to stop using it. Maybe there was some problem with my loadbalancers.

Still, I'm using haproxy, which is nothing if not fast and rock solid, so I didn't really suspect the software. Rather, maybe it was a network issue with the LB itself. So the first thing I did was double the number of loadbalancers in play, by setting my DNS record to point to my secondary LB at the same time. This should lessen the amount of traffic hitting my primary LB and route part of it through a completely different VM, and thus show whether the problem was on the haproxy side. Sadly, this didn't improve things at all.

OK, so next step: I checked how long a request takes to return from the backend after haproxy sends it over. The results were not good.

I don't blame you if you cannot read this, but what it basically says is that a request hitting a POST on my /inbox took between 0.8 and 1.2 seconds. This is bad! This is supposed to be a tiny payload telling you an event happened on another instance; it should be practically instant.

Even weirder, this was affecting all instances, not just lemmy.world. So this was clearly a problem on my end, but it also confused me. Why was I not having trouble with other instances? The answer came when I was informed that 0.19.3 added a brand new federation queue.

You see, the old versions of lemmy used to send all federation actions over as soon as they received them. Fire and forget style. This naturally led to federation events being dropped due to a myriad of issues, like network, downtimes, gremlins etc. So you would lose posts, comments and votes, and you would (probably) never realize.

The new queue added order to this madness by making each instance send its requests serially. A request would be sent again and again until it succeeded, and the next one would only be sent once the previous one was done. This is great for instances not experiencing issues like mine. You see, at this point I was processing approximately 1 incoming federation request per second, while lemmy.world was sending around 3. Even worse, I would occasionally time out as well by exceeding 10 seconds to process, causing 2 more seconds of wait time.
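
To illustrate why this serial behaviour hurts a slow receiver, here's a toy Python sketch of the queue logic described above (not lemmy's actual Rust implementation; the function and names are made up for the example):

```python
import time

import requests  # assumes the `requests` package is installed


def drain_queue(queue, inbox_url):
    """Send pending federation activities one at a time, in order.

    Each activity is retried until it succeeds, and the next one is only
    attempted after the previous one is done. If the receiver needs ~1s
    per request but new activities arrive at ~3/s, this queue only grows.
    """
    while queue:
        activity = queue[0]
        try:
            resp = requests.post(inbox_url, json=activity, timeout=10)
            resp.raise_for_status()
        except requests.RequestException:
            time.sleep(2)    # back off, then retry the *same* activity
            continue
        queue.pop(0)         # only move on once the receiver acknowledged it
```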

Unlike lemmy.world, the other instances federating with mine didn't have nearly as much activity, so 1 per second was enough to keep in sync with them. This explained why I was seemingly only affected by lemmy.world and nobody else. I was somewhat slow, but only slow enough to notice if the source had too much traffic.

OK, we knew the “what”; now we needed to know the “why”.

At this point I'm starting to suspect something is going on with my database, so I have to start digging into stuff I'm really not that familiar with. This is where the story gets quite frustrating, because there just aren't a lot of admins in the chat who know much about the DB side of lemmy's internals. So I would ask a question, or provide logs, and then had to wait sometimes hours for a reply. Fortunately both sunaurus from lemm.ee and phiresky were around, who could review some of my queries.

Still, I had to know enough SQL to craft and fine-tune those queries myself, and how to enable and use things like pg_stat_activity etc.
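
For anyone in the same boat, this is roughly the kind of check I mean: a minimal sketch using psycopg2 (with a hypothetical connection string) that lists the longest-running non-idle queries via the pg_stat_activity view that ships with PostgreSQL.

```python
import psycopg2  # assumes psycopg2 is installed

# Hypothetical DSN; point it at your lemmy database.
conn = psycopg2.connect("host=db.example.internal dbname=lemmy user=lemmy password=secret")

with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT pid,
               now() - query_start AS running_for,
               state,
               left(query, 120) AS query
        FROM pg_stat_activity
        WHERE state <> 'idle'
        ORDER BY running_for DESC NULLS LAST
        LIMIT 20;
    """)
    for pid, running_for, state, query in cur.fetchall():
        print(f"{pid} | {running_for} | {state} | {query}")
```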

Through trial and error we did discover that some insert/update queries were taking a bit too much time to do their thing, which could mean that we were I/O bound. Easy fix: disable synchronous_commit, sacrificing some safety for speed. Those slow queries went away, but the problem remained the same. WTF?!

There was nothing else clearly slow in the DB, so there was nothing more we could do there. My next thought was: maybe it's a networking issue between my loadbalancers and my backend. OK, so I needed to remove that from the equation. I set up a haproxy directly on top of my backend, which would allow me to go through the loopback interface and have 0 latency. For this I had to ask the lemmy.world admins to kindly add lemmy.dbzer0.com directly to their /etc/hosts file so that they alone would hit my local haproxy.

No change whatsoever!

At this point I'm starting to lose my mind. It's not networking between my LB and my backend, and it's not the DB. It has to be the backend. But it's not under any load and there are no errors. Well, not quite: there are some “INFO” logs which refer to lost connections or unexpected errors, but nobody in the chat seemed to worry about them.

Right, that must mean the problem is networking between my backend and my database, right? Unlike most lemmy instances, I keep my lemmy DB and my backend separated. Also, the DB has a limited number of connections, and the lemmy backend limits itself to a small pool of connections. Maybe I was running out of connections because of slow queries?

OK, let's increase that to a couple of thousand and see what happens.

Nothing happens, that's what happens. Same 1 request per second.

As I'm spiraling more and more towards madness, and the chat is running out of suggestions, sunaurus suggests that he add some extra debugging to lemmy and that I run it to try and figure out which DB query is losing time. Great idea. Problem is, I have to compile lemmy from scratch to do that. I've never done that before. Not only that, I barely know how to use docker in the first place!

Alright, nothing else I can do, got to bite that bullet. So I clone the lemmy backend and, while waiting for sunaurus to come online, I start hacking at it to figure out how to build a lemmy backend docker container from scratch. I run into immediate crashes and despair. Fortunately nutomic (one of the core devs) walked by and told me the git commands to run to fix it, so I could proceed with cooking my very first lemmy container. Then nutomic helped me realize I didn't need to set up a whole online repo to transfer my docker container. The more you know…

Alright, so I cooked a container and plugged it into a whole separate docker infra, which is only connected to the lemmy.world loadbalancer, so I could strip the logs down to nothing but federation requests. So far so good.

Well, not quite. Unfortunately I forgot that the “main” branch of lemmy is actually the development branch and has untested code in there. So when I was testing my custom docker deployment, I migrated my DB to whatever the experimental schema is on main. Whoops!

OK, nothing seemingly broke. Problem for a different day? No, just foreshadowing.

Finally sunaurus comes back online and gives me a debug fork. I eagerly compile and deploy it on prod and then send some logs to sunaurus. We were expecting to see 1 or 2 queries that were struggling, so maybe a bad lock situation somewhere. We did not expect to see ALL queries, including the simplest ones, such as looking up a language, taking 100ms or more! That can't be good!

Sunaurus connects the dots and asks the pertinent question: “Is your DB close to the Backend, geographically?”

Well, “Yes”, I reply, “I got them in the same datacenter”. “Can you ping?” he asks.

OK, I ping. 25ms. That's good, right? Well, in isolation, that's great. Where it's not so great is backend-to-DB communication! That's a distance of 1000s of km.

You see, typically a loadbalancer just makes one request to the backend and gets one reply, so a 25ms roundtrip is nothing. However, a backend talks to the DB a lot. In this instance, for every incoming federation action, the backend does something like 20 database calls to verify and submit. Multiply each of these by a 25+25ms roundtrip and you get 1000ms of extra latency before any actual processing on the DB!
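
As a quick back-of-the-envelope illustration, using the rough numbers above:

```python
db_calls_per_action = 20   # rough count of queries per incoming federation action
rtt_ms = 25 + 25           # ~25ms each way between backend and DB

overhead_ms = db_calls_per_action * rtt_ms
print(f"network overhead per action: ~{overhead_ms} ms")              # ~1000 ms, before any real DB work
print(f"max actions ingested per second: ~{1000 / overhead_ms:.1f}")  # ~1/s, against ~3/s incoming
```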

But how did this happen? I was convinced all my servers were in the same geographic area. So I go to my provider panel and check. Nope, all my servers BUT the backend are in the same geographic area. My backend happens to be around 2000 km away. Whoops!

Turns out, when I was migrating my backend back in the day because I ran into performance issues, I failed to pay attention to that little geographic detail. Nevertheless, it all worked perfectly well until this specific set of circumstances, where the biggest lemmy instance upgraded to 0.19.3, which switched to serial federation, which my slow-ass connection couldn't keep up with. In the past, I would just get flooded by sync requests from lemmy.world as they came. I would be slow, but I'd process them eventually. Now, the problem became obvious.

Alright, it's time to roll up my sleeves and migrate servers! Thank fuck I have everything written as code in Ansible, so the migration was relatively painless (other than slapping Debian 12 around to let me do fucking docker-compose operations with python, goddamnit!)

A couple of hours later, I had migrated my backend to the same DC as the database and, as expected, my processing time for federation actions suddenly dropped to the order of 50ms instead of 1000ms. This means I could ingest close to 20 actions per second from lemmy.world, while it was only generating around 3/s of new activity from its userbase. Finally we started catching up!

All in all, this has been a fairly frustrating experience and I can't imagine anyone who's not doing IT infrastructure as their day job being able to solve this. As helpful as the other lemmy admins were, they were relying a lot on me knowing my shit around Linux, networking, docker and postgresql at the same time. I had to do extended DB analysis, fork repositories, compile docker containers from scratch and deploy them ad hoc, etc. Someone who just wants to host a lemmy server would give up way earlier than this.

For me, a very stressful component was the lack of replies in the chat. I would sometimes write pages of debug logs, and there would be no reply from anyone for 6 hours or more. It gave me the impression that nobody had any clue how to help me and I was on my own. In fact, if it weren't for sunaurus specifically, who had enough infrastructure, Rust and DB chops to get an insight into where it was all going wrong, I would probably still be out there, pulling my hair out.

As someone hosting a service like this, especially one with 12K people on it, this is very scary! While 2 lemmy core developers were in the chat, the help they provided was very limited overall, and this session mostly relied on my own troubleshooting skills.

This reinforced in my mind that as much as I like the idea of lemmy (or any of the other threadiverse SW), this is only something experts should try hosting. Sadly, this will lead to more centralization of the lemmy community onto a few big servers instead of many small ones, but given the nature of the problems one can encounter, and the lack of support for fixing them if you're not an expert, I don't see another option.

Fortunately this saga ended and we're now fully in sync with lemmy.world. Ended? Not quite. You see, today I realized I couldn't upload images on my instance anymore. Remember when I started the development version of lemmy by mistake from main? Welp, that broke image uploads. So I had to learn how to downgrade a lemmy instance as well. Fortunately sunaurus had my back on this too!

To spare some people the pain, I’ve sent a PR to the lemmy docs to expand the documentation for building docker containers and doing troubleshooting. My pain is your gain.

This also gave me an insight into how lemmy federation will eventually break when a single server (say, lemmy.world) grows big enough to start overwhelming even servers that are not badly set up like mine was. I have some ideas to work around some of this, so I plan to write up a suggestion on how to become more future-proof, which would incidentally have prevented the same issue that happened to me in the first place.

In the meantime, enjoy the Divisions by zero, which as a result of the migration should now feel massively faster as well!

Can we improve the Fediverse Allow-List Model?

Looks like someone really kicked the hornet’s nest recently on mastodon by announcing (not even deploying) a Mastodon-BlueSky bridge. Just take a look at the github comments here to get an idea of how this was received.

Plenty of people way more experienced than myself have weighed in on this issue, so I don't feel the need to add my two cents. However, I wanted to talk about a very common counter-argument made towards those who do not want such bridges to exist. Namely, that the Fediverse already provides the tools to make such a bridge a non-issue: the allow-list model.

The idea being that if your ActivityPub server by default rejects all federation except with trusted instances, then such bridges pose no problem whatsoever. The bridge (and any potential undercover APub scrapers) would not be able to get to your instance anyway.

Naturally, the counterargument is that this is way too limiting to one's reach, and that people shouldn't be forced into isolation like this. Unfortunately, the alternative here appears to be trying to scold others into submission, and this is unlikely to be a long-term solution. Eventually the Eternal September will come to the Fediverse. If you spent the past few years relying on peer pressure to enforce social norms, then the influx of people who do not share your values is going to make that tactic moot.

In fact, we can already see the pushback to the scolding tactics unfolding right now.

The solution then has to be a way to improve how we handle such scenarios. Improve the tooling and our tactics so that such bridges and scrapers cannot be an issue.

A lot of the frustration I feel also comes down to the limited set of tools provided by Mastodon and other Fediverse services. A lot of the time, the improvement of tooling is stubbornly refused by the privileged core developers who don’t feel the need to support the needs of the marginalized communities. But that doesn’t mean the tooling couldn’t be expanded to be more flexible.

So let's think about the Allow-List model for a moment. The biggest issue with an Allow-List is not necessarily that the origin server restricts itself from the discussion. In fact, they're probably perfectly happy with that. The problem is that if this became the norm, it would massively restrict the biggest strength of the Fediverse, which is that anyone can create and run their own server.

If I make a new server and most everyone I want to interact with is in Allow-List mode, how do I even get in? We then have to start creating informal communication channels where one has to apply to join the allow-circle. Such processes have way too many drawbacks to list, such as naturally marginalizing Neurodivergent people with Rejection Sensitivity Dysphoria, balkanizing the Fediverse, empowering whisper networks and so on.

I want to instead suggest an alternative hybrid approach: The Feeler network. (provisional name)

The idea is thus: you have your well-protected servers in Allow-List mode. These are the servers which require protection from constant harassment when their posts are spread publicly. These servers have a few “Feeler” instances they trust on their allow-list. Those Feeler servers in turn do not have allow-mode turned on, but rely on blocklists like usual. Their users would be those privileged enough to be able to handle the occasional abuse or troll coming their way before blocking them.

So far so good. Nothing changes here. However, what if those Feeler servers could also use their wider reach to see which instances are cool and announce that to the servers that trust them? So a new instance appears in your federation. You, as a Feeler server, interact with them for a bit, nothing suspicious happens, and their users all seem ideologically aligned enough. You then add them to a public “endorsed list”. Now all the servers in your trust circle who are in allow-mode see this endorsement and automatically add them to their allow-lists. Bam! Problem solved. New servers have a way to be seen and eventually come into reach of Allow-List instances through a sort of organic probation period, and allow-listed servers can keep expanding their reach without private communications or arduous application processes.

Now you might argue: “Hey Db0, yes my feelers can see my allow-list server posts, but if they boost them, now anyone can see them, and now they will be bridged to bluesky and I’m back in a bad spot!”

Yes, this is possible, but it's also technically solvable. All we need to do is make the Feeler servers federate boosts of posts from servers in allow-mode only to the servers those allow-lists already contain. So let's say servers T1 and T2 are instances in allow-list mode which trust each other. Server F1 is a Feeler server trusted by T1 and T2. Server S1 is an external instance that is not blocked by F1, but not yet endorsed either. A user on F1 boosts a post from T1. Normally a user on S1 would see that post by following that user. All we need to do is change the software so that if F1 boosts a post from T1, the boost only federates towards T2 and other instances on T1's allow-list, instead of everyone. Sure, this would require a bit more boost complexity, but it's nothing impossible. Let's call this “protected boost”.
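
To make the “protected boost” rule concrete, here's a rough Python sketch of the delivery decision (not actual Mastodon code; the t1/t2/f1/s1 domains just mirror the T1/T2/F1/S1 example above):

```python
def boost_delivery_targets(post_origin, follower_instances, allow_lists):
    """Decide where a Feeler's boost of a post may federate.

    post_origin:        domain the boosted post came from (e.g. "t1.example")
    follower_instances: all instances hosting followers of the booster
    allow_lists:        map of domain -> set of allowed domains for instances
                        running in allow-list mode (absent means open federation)
    """
    origin_allow_list = allow_lists.get(post_origin)
    if origin_allow_list is None:
        # The origin federates openly: the boost can go everywhere as usual.
        return set(follower_instances)
    # "Protected boost": only deliver to instances the origin already allows.
    return set(follower_instances) & origin_allow_list


# F1's boost of a T1 post reaches T2, but not the unendorsed outsider S1.
allow_lists = {"t1.example": {"t2.example", "f1.example"}}
print(boost_delivery_targets("t1.example", {"t2.example", "s1.example"}, allow_lists))
# -> {'t2.example'}
```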

Of course, this would require all APub software to expose an “endorsement” list for this to work. This is where the big difficulty comes in, as you now have to herd the cats that are the multitude of APub developers into adding new functionality. Fortunately, this is where tools like the Fediseer can cover for the lack of development, or outright rejection by your software's developers. The Fediseer already provides endorsement functionality along with a full REST API, so you can already implement this Feeler functionality with a few simple scripts!

The “protected boost” mode would require mastodon developers to do some work of course, as it relies on software internals which cannot be easily hacked around by server admins. But this too could potentially just be a patch to the software that only Feeler admins would need to run.

The best part of this approach is that it doesn't require any communication whatsoever. All it needs is for the “Feeler” admins to be actively curating their endorsements (either on the Fediseer, or locally if it's ever added to the SW). Then all an allow-list server has to do is choose which Feelers it trusts and “subscribe” to their endorsement lists for its own allow-list. And of course, they can synchronize or expand their allow-list further as they wish. This approach naturally turns the distributed nature of the Fediverse into a strength, instead of a weakness!
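
As a sketch of what that “subscription” could look like: a minimal script pulling the combined endorsements of your trusted Feelers from the Fediseer and writing them out as an allow-list. The Feeler domains are made up, and the endpoint name and response shape are assumptions modeled on the Fediseer's censures_given endpoint (check the API docs for the exact paths).

```python
import requests  # assumes the `requests` package is installed

FEDISEER = "https://fediseer.com/api/v1"
TRUSTED_FEELERS = ["feeler-one.example", "feeler-two.example"]  # hypothetical Feeler domains


def fetch_endorsed_instances(feelers):
    """Return the set of domains endorsed by any of the given Feeler instances.

    Assumes an endorsements_given endpoint analogous to censures_given,
    returning {"instances": [{"domain": ...}, ...]}.
    """
    url = f"{FEDISEER}/endorsements_given/{','.join(feelers)}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return {entry["domain"] for entry in resp.json().get("instances", [])}


if __name__ == "__main__":
    allow_list = fetch_endorsed_instances(TRUSTED_FEELERS)
    # Feed this into your instance's allow-list however your software expects it.
    with open("allowlist.txt", "w") as f:
        f.write("\n".join(sorted(allow_list)))
    print(f"Wrote {len(allow_list)} endorsed domains to allowlist.txt")
```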

Now personally, I'm a big proponent of the “human touch” in social networks, so I feel that endorsement lists should be a manual mechanism. But if you want to take this to the next level, you could also easily set up a mechanism where newly discovered instances automatically pass into your endorsement list after X weeks/months of interaction with your users without reports, and X amount of likes or whatever. Assuming admins are on point, this could make Feeler servers into a trusted gateway into a well-protected space on the fedi, which bad actors would find extraordinarily difficult to infiltrate, regardless of how many instances they spawn. And this network would still keep increasing its reach constantly, without adding an extraordinary amount of load on its admins.

Barring the “protected boost” mode, this concept is already possible through the Fediseer. The scripts to do this work already exist as well. All it requires is for people to attempt to use it and see how it functions!

Do point out pitfalls you foresee in this approach and we can discuss how to potentially address them.

Rebuttals on the Fediseer

I noticed recently that a few instances are issuing counter-censures on the Fediseer just to be able to reply to the censures they received from others. While I get the need to say your piece, I didn't like this use of the feature. So I wanted to provide an official way for instances to reply to censures and hesitations they have received.

So now we've deployed Rebuttals. A Rebuttal is a “reply” to a censure or hesitation from another instance. If you have received both, the same rebuttal applies. It is set up this way so that rebuttals are not tied to any specific censure/hesitation and therefore don't get deleted when those are. If someone deletes and re-adds their censure against your instance, the same rebuttal will re-appear.

As always, remember there’s no hate speech allowed on the Fediseer, so make sure your rebuttals stay cool.

Also keep in mind the Fediseer is not a forum. There won’t be multi-threaded discussions going on. The expectation is that you can use the Rebuttals to explain why a censure is bogus and whatnot. Not to maintain flamewars.

I know there's plenty of beef on the Fediverse. I'm hoping this feature won't become fuel for it, but rather a way for everyone to feel they have a say on neutral ground. I hope this can serve as a de-escalation mechanism as well. Sometimes an instance might receive a hesitation or censure from someone they don't dislike, due to a misunderstanding. This will allow them to try and counter that, without having to counter-censure.

Let me know what you think in the comments below, by replying to this post on mastodon, or in the lemmy community.

Btw, since I have you here, did you know that you can support the hosting and development of the Fediseer? Currently I’m paying the hosting costs out of pocket. It’s not a lot but it would be nice if the infrastructure costs were self-sustaining. So if you find value in this service, feel free to throw some money at the development team.

Fediseer multi-domain Endorsement and Censures

Fediseer has now been updated with two new features.

Multi-domain endorsement lists

One can now request all endorsements given out by multiple domains in a single call.

This allows instances to quickly discover a “list of friends” based on other instances. Use cases for this might include scripts which auto-approve comments in moderation, or automatically update a fediverse instance’s whitelist based on common endorsements.

Censures

The current Fediseer guarantees are meant to apply only to considerations of spam. As such, we did not have a way to mark instances that many would consider terrible in every way except spam (e.g. a pro-nazi instance, or an instance allowing loli content).

To solve this, a new feature has been added: censures.

An instance can now apply a censure to any other instance domain (whether it's guaranteed already or not) for any reason. This extreme disapproval can be for any subjective reason, but like endorsements, it doesn't, by itself, have any mechanical impact.

In fact, because endorsements and censures are explicitly subjective, I have decided not to display censure counts on instance details, to avoid people using them to “objectively” rate instances, which is not their intended purpose. One's instance could easily be censured by a lot of, say, fascist instances, but this should have no overall impact on how non-fascists perceive that instance.

Instead, one can see combined censure lists from multiple instances, like one can see combined endorsements. You simply pass a comma-separated list of domains to the /api/v1/censures_given endpoint and you get a list of all the instances they have censured collectively.

This endpoint can then be consumed to make collective blocklists among instances that trust each other, without one having to manually discover and parse different instance federation blocklists all the time.
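
For example, a minimal sketch of such a consumer; the trust-circle domains are made up, and I'm assuming the domains go in the path and the censured instances come back under an `instances` key (check the API docs for the exact shape).

```python
import requests  # assumes the `requests` package is installed

FEDISEER = "https://fediseer.com/api/v1"
TRUSTED_INSTANCES = ["instance-a.example", "instance-b.example"]  # hypothetical trust circle


def combined_censures(domains):
    """Fetch every instance censured by any of the given domains."""
    url = f"{FEDISEER}/censures_given/{','.join(domains)}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return sorted({entry["domain"] for entry in resp.json().get("instances", [])})


if __name__ == "__main__":
    blocklist = combined_censures(TRUSTED_INSTANCES)
    # Feed this into your defederation list, or into an automod script
    # that merely flags content from these domains for manual review.
    print("\n".join(blocklist))
```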

Likewise, by not being explicitly tied to a blocklist, the censure list could also be used to enforce a softer approach. For example by having an auto-moderator script which flags comments from censured instances for manual review, etc. This could allow an instance to retain a less-restrictive blocklist, but still allow for more rapid response to users coming from potentially problematic instances.

As always, the point of the Fediseer is to provide the information easily, rather than to dictate how to use it. I'm excited to see in which ways people will utilize the new capabilities.

Reddit is a dead site running

Yesterday I read the excellent article by Cory Doctorow, Let the Platforms Burn, and this particular anecdote stood out:

The thing is, network effects are a double-edged sword. People join a service to be with the people they care about. But when the people they care about start to leave, everyone rushes for the exits. Here’s danah boyd, describing the last days of Myspace:

If a central node in a network disappeared and went somewhere else (like from MySpace to Facebook), that person could pull some portion of their connections with them to a new site. However, if the accounts on the site that drew emotional intensity stopped doing so, people stopped engaging as much. Watching Friendster come undone, I started to think that the fading of emotionally sticky nodes was even more problematic than the disappearance of segments of the graph.

With MySpace, I was trying to identify the point where I thought the site was going to unravel. When I started seeing the disappearance of emotionally sticky nodes, I reached out to members of the MySpace team to share my concerns and they told me that their numbers looked fine. Active uniques were high, the amount of time people spent on the site was continuing to grow, and new accounts were being created at a rate faster than accounts were being closed. I shook my head; I didn’t think that was enough. A few months later, the site started to unravel.

This is exactly what is happening to Reddit currently. The most passionate contributors, the most tech-literate users, and the integrators who make all the free tools in the ecosystem around reddit which make that service much more valuable, have left and will never look back.

From the dashboards of u/spez, however, things might look great. Better, even! The drama around their decision-making certainly caused a lot more posts and interactions, and the loss of the 3rd party apps drove at least a few users to the official applications.

But this is an illusion. Like MySpace before them, the metrics might look good, but the soul of the site has been lost. It's not easy to explain, but since I've started using Lemmy full-time, I've seen the improvement in engagement and quality in real time. Half a month ago, posts could barely pass 2 digits; now they regularly break 3 and sometimes 4 digits. And the quality of the discussions is a pleasure to go through.

I've said it before, but reddit was never a particularly good site. Their saving grace was the openness of their API and their hands-off approach to communities. The two things they just destroyed. It's those 3rd party tools and communities that made reddit what it is. And as the ecosystem around reddit sputters and dies, the one around the Threadiverse is progressing at an astonishing rate.

Not only are the integrators coming from reddit aware of what kind of bots and tools are going to be very useful, but a lot of those tools have been shut off from reddit and switched to the lemmy API instead, explicitly cannibalizing the quality of the reddit experience. And due to the completely open API of the Threadiverse, those tools now get unparalleled access and power.

Sure, if you visit reddit currently, you'll see people talking and voting, but as someone who's been there from the start, I can tell you the quality has fallen off a cliff and is reaching terminal velocity. It just feels like one is still flying!

Not just the quality of the posts, where only the most superficial meme stuff can rise to the top; not just the quality of the discussion; even the mere vibe of the discussions is just lost.

There's now significant bitterness and hostility, especially as the mods who were responsible for maintaining the quality have left, are being hands-off, or just don't have the tools needed to keep up. I've heard from multiple people who are leaving even though they were not originally planning to, because the people left on reddit are just so toxic.

This is a very vicious cycle which will accelerate the demise of that site even further.

A house fire can go from a spark to a raging inferno in less than a minute. The flames consuming reddit are just now climbing up the curtains and it still appears manageable, but it's already too late. Reddit has reached terminal enshittification and the only thing left for it to do is die.

Reddit worked despite reddit.

I visit Reddit all the time. And I visited Digg before that. In fact, I've been hooked on this mode of operation since Digg. Suffice to say, something about link aggregation tickles my ADHD brain just right.

However, with the recent blackout of a big part of reddit, I decided to start my own Lemmy instance and use Lemmy primarily through it instead. Since I started this experiment, I feel the urge to visit Reddit for my “fix” less and less. I have some thoughts about that.

In Twitter vs Mastodon, AKA “micro-blogging”, the value was in the specific people one followed, which made it way harder to switch services because one was held back by other people. I.e. the people kept each other locked in. Similar to how Facebook keeps everyone locked in its walled garden because it's the only social media their parents and grandparents managed to learn to use.

In Reddit, however, the value is all about the specific forums, or “subreddits” in the lingo. The specific people one was talking to never really mattered. What was important was the overall engagement and general sense of shared interest. This has always been the core strength of Reddit, and its early pioneers like Aaron Swartz understood that.

This is why the minimalist reddit of old managed to dethrone Digg, when the latter decided that its core principle wasn't user-curated content, but linkspam. The people who migrated to Reddit made it what it is today, by creating and nurturing their communities over the years.

Any beneficial actions by reddit itself have either followed what the community was already doing (such as adding CSS options or on-boarding the automoderator bot), or been forced by bad optics, such as when they finally had to ban /r/coontown, /r/fatpeoplehate, /r/jailbait (which their current CEO moderated, btw), etc.

The community and the people who run the subreddits have always had to make the minimalist options allowed to them work. They had to develop their own tools and enhancements, such as RES and Moderator Toolbox, while Reddit couldn't even provide much-requested functionality to counter known abuses like cross-subreddit raiding.

Instead, Reddit focused on adding useless features nobody asked for, like NFTs. On the usability front, the new look was their push to take the site more towards a generic social media network, with friends, follows, awards and avatars, instead of focusing on their core product: link aggregation and discussions.

In fact, every action they took was laser-focused on social-media lock-in, extracting wealth, and adding features people didn't care for, which is why most third party apps simply ignored all that stuff.

Through all this, their valuable communities kept fighting against reddit management's pushes so that they could do what was right, even if some lost that fight, like /r/AMA, which became but a shadow of its former self when the cowardly owners fired the low-level employee leading its success and scapegoated their then female CEO for it.

Eventually, though, something had to give, and reddit seems to have realized that their users are too stubborn to simply accept the new paradigm designed for them, where they watch more ads, buy more reddit gold and get addicted to NFTs. And 3rd party apps enabled users to use the valuable part of reddit and skip the enshittification all too easily.

So they had to go. And here we are.

Unfortunately for reddit, since its core value has always been the links and the discussions around said links, rather than specific people and a social network around them, it is stunningly easy to jump ship. It doesn't take a lot to keep a community going on Lemmy instead of Reddit. All it needs is a handful of dedicated people to keep finding and posting links, and the discussions and memes will easily follow.

I don't need to know the links are coming from Gallowboob; in fact, I never cared who posted the links or started the discussions. Reddit has had the “friends” feature for close to a decade now, and I have “friended” less than a handful of people. There's literally nothing holding me and most people back except our existing routines.

There is of course still a lot of momentum in reddit communities, and a lot of mods who really don't want to lose their status. Nevertheless, I'm finding I'm not actually missing much by staying exclusively on lemmy atm, and I see a lot of people realizing the same thing increasingly fast. The loss of major apps like Apollo, RIF and Sync has already been the final nail in the coffin for a lot of people.

This exodus might already be unstoppable unless reddit completely capitulates and goes back on their API plans. But I'm not holding my breath.

Feel free to come and hang out at the Divisions by zero lemmy instance btw. We’ll do fun things!

What About Paid Services on Top of the AI Horde?

While the AI Horde will always be free for all, anyone can develop a frontend for it and ask their users to pay for its use. This blogpost explains why this is OK so long as they give back as much as they take, and how this is enforced.

Recently, a paid service built on top of the AI Horde was announced on reddit's /r/stablediffusion, and a big discussion opened on the ethics of charging people money for access to the free compute provided by the AI Horde. I've talked about this in my discord with some users who were concerned, but I foresee it's a subject that will keep coming up. So it's a good time to clarify my position on this subject, “officially” as they say.

When I initially envisioned the AI Horde, this sort of question was foremost on my mind: “How do I prevent abuse of a crowdsourced system with unrestricted access for everyone?” My answer to this question was the kudos system, which is baked into every usage of the AI Horde.

Due to the “protection” of the kudos system, we can offer the AI Horde service as an open API for everyone, for any purpose, knowing that whatever they do, they'll have to either support the health of the service or go to the back of the queue. This allows us not to worry about who is using the service or how, because the kudos requirements are inescapable. This bears reiterating:

WE DO NOT CONTROL HOW OTHERS INTEGRATE WITH THE AI HORDE

Because we cannot control people, I am cognizant that people might try to charge money for their services based on the horde (which, again, we cannot stop!) or even use other technologies we wholly reject (like blockchain). But it doesn't matter how someone uses the AI Horde; so long as they remain within the limits of the kudos system, they will have to provide more to the AI Horde than they take out, which balances things out for everyone.
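
To illustrate the principle with a deliberately simplified toy model (this is not the actual AI Horde scheduler): generating costs kudos, contributing workers earn kudos, and pending requests are served richest-first, so an integration that only consumes inevitably drifts to the back of the queue.

```python
import heapq


class ToyHorde:
    """Toy model of kudos-weighted scheduling; not the real AI Horde code."""

    def __init__(self):
        self.kudos = {}      # account -> kudos balance
        self.queue = []      # max-heap by requester kudos (negated for heapq)
        self._counter = 0    # tie-breaker to keep FIFO order among equals

    def record_contribution(self, account, kudos_earned):
        self.kudos[account] = self.kudos.get(account, 0) + kudos_earned

    def submit_request(self, account, job, kudos_cost):
        self.kudos[account] = self.kudos.get(account, 0) - kudos_cost
        self._counter += 1
        heapq.heappush(self.queue, (-self.kudos[account], self._counter, account, job))

    def next_job(self):
        _, _, account, job = heapq.heappop(self.queue)
        return account, job


horde = ToyHorde()
horde.record_contribution("generous-frontend", 100)        # gives back via its own workers
horde.submit_request("generous-frontend", "job-a", 10)
horde.submit_request("freeloader-frontend", "job-b", 10)   # only consumes
print(horde.next_job())  # ('generous-frontend', 'job-a') is served first
```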

This is the practice of all open paradigms out there. They all rely on volunteer effort but allow people to find business models which can make them money, so long as they respect the open paradigm.

For example, the AI Horde is modeled after BitTorrent. It would be just as absurd to claim that the BitTorrent protocol itself is flawed because a torrent client charges money to its users, adds malware or integrates blockchain. Those clients still have to play by the BitTorrent protocol and by the rules of whichever trackers they rely on.

Likewise, even the most hardcore copyleft licenses like GPL explicitly allow commercial use of the software. Because people need to eat! It would likewise be absurd to say that the Linux kernel is unethical, because companies are making money selling stuff built on top of the Linux Kernel!

So, knowing that open systems cannot control how others use them, and that the actions of integrators do not represent flaws of the open system itself, we instead ask people to act in good faith. We request that people give back to the AI Horde as much as, or more than, they take. This means that everyone benefits. We likewise block registrations outside of the AI Horde and inform anyone registering that they can always use the AI Horde for free. This ensures that the owner of each service competes with every other free AI Horde UI out there. If their users still want to give them money after that, then they are obviously bringing something valuable to the table for those users. And again, that is OK with us, so long as they give back to the AI Horde according to their usage.

Finally, whatever one does, remember: they cannot escape the kudos system. A super popular front-end to the AI Horde which does not have at least net-zero consumption will quickly find itself with queue times so high that they will drive everyone off their service.

The AI Horde is absolutely built to combat corporate influence and enshittification; however, it is still an open service, and therefore it cannot control who uses it without sacrificing the openness it is built on, or adding moderation overhead so massive that it would shutter the service.

Does that mean that anything goes? No, of course not! As with the anti-CSAM filter, there are a few rules that are of existential importance to the health of the AI Horde. Another example is how one treats kudos themselves: I routinely remind people not to consider them a currency and not to assign any monetary value to them. The reason is that the exchange of kudos for money would introduce such immense perverse incentives into the equation that it would force the AI Horde developers and moderators to switch full-time to countering scams and exploits instead of trying to improve the service. This is such a thick red line that I'm prepared to go to extremes to enforce it, even up to disabling kudos transfers altogether!

Fortunately, until now people have been following these directives, but what if tomorrow a service appeared whose business model relied on selling the kudos they generate to their users, or which allowed people to bypass the anti-CSAM filter somehow? Well, that would force me to take active measures to counter such a service explicitly, which could easily escalate into an endless cat & mouse game to the detriment of the service. But it would be a necessary course of action. The existence of a generic paid service, however, outside of the violation of those existential rules, does not necessitate it, precisely because it's not a concern that would warrant the massive amount of resources that would have to be assigned to counter it.

All that said, I know people are still going to oppose the mere existence of integrations which found a way to make money using the AI Horde as a backend, even if those give back more than they take. Even if they help pay for the development and infrastructure of the AI Horde for the benefit of all. That is OK. Everyone should follow their conscience and values. I have even provided tools and controls for Workers to limit their exposure to practices they do not support, but even if those are not enough, then it is OK to not be part of the AI Horde.

This is also a reason why the AI Horde is Free/Libre Software. If someone else has a different ethical framework for how crowdsourced compute resources like these should be handled, they are always welcome to host their own version of the AI Horde, in the same sense that anyone can host their own BitTorrent tracker, with any rules they want! I honestly believe the current approach of the AI Horde, with unrestricted access, is the way to go to democratize AI, but maybe I'm just wrong. It remains to be seen.

However, I do want to ask that people not spread FUD about who we are affiliated with and what practices we support. Our exact stance is what I have explained above.

At the end of the day, thousands of people are currently getting free generative AI output, and we do not plan to stop this access, ever, no matter who integrates with us or how. The AI Horde will always have a way to be used for free, without restrictions!

Fantasy.ai is how the enshittification of Stable Diffusion begins

Fantasy.ai has gotten into hot water since its inception, which, for a company based on the open source community, is quite an impressive feat on its own.

For those who don't know, fantasy.ai basically goes to various popular model creators and tempts them with promises of monetary reward for their creative work, if only they agree to sign over some exclusive rights for commercial use of their model, as well as some other priority terms.

It's a downright Faustian deal, and I would argue that this is how a technology that began from open source ideals, to be able to counteract the immense weight of players like OpenAI and Midjourney, begins to be enclosed.

Cory Doctorow penned an excellent new word for the process by which web 2.0 companies die: enshittification.

  • First they offer amazing value to their users, which attracts a lot of them and makes the service more valuable to other businesses, like integrating services and advertising agencies.
  • Then they start making the service worse for their user-base, but more valuable for their business partners, such as by increasing the amount of adverts for the same price, selling user data and metrics, pushing paid content to more users who don't want to see it, and so on.
  • Then, once their business partners are also sufficiently reliant on them for income, they tighten their grip and start extracting all the value for themselves and their shareholders, such as by requiring extravagant payment from businesses to let people see the posts they want to see, or the products they want to buy.
  • Finally, eventually, inexorably, the service experience becomes so shitty, so miserable, that it breaches the Trust Thermocline and something disruptive (or sometimes, something simple) triggers a mass exodus of the user base.
  • Then the service dies, or becomes a zombie, filled with more and more desperate advertisers and an ever-increasing flood of spam as the dying service keeps rewarding executives with MBAs rather than its IT personnel.

Because Stable Diffusion is open source, we are seeing an explosion of services based on it cropping up practically daily. A lot of those services are trying to figure out how to stand out from the others, so we have a unique opportunity to see how enshittification can progress in the open source generative AI ecosystem.

We have services at the first stage, like CivitAI, which offers an amazing service to its user-base by tying social media to Stable Diffusion models and fine-tunes and allowing easy sharing of your work. They have not yet figured out their business plan, which is why, until now, their service appears completely customer-focused.

We have services like Mage.space, which started completely free and uncensored for all and as a result quickly gathered a dedicated following of users without access to GPUs who used it for free AI generations. They are progressing to the second stage of enshittification: locking NSFW generations behind a paywall, serving adverts, and now also making themselves more valuable to model creators as soon as they smelled blood in the water.

We do not yet have Stable Diffusion services at the late stages of enshittification, as the environment is still way too fresh.

Fascinatingly, the main mistake of Fantasy.ai is not their speedrun through the enshittification process, but rather attempting to bypass the first step. Unfortunately, fantasy.ai entered the generative AI game late, as its creator is an NFT-bro who wasn't smart enough to pivot as early as the Mage.space NFT-bro. So to make up for lost time, they are flexing their economic muscles, trying to make their service better for their business partners (including the model creators) and choking their business rivals in the process. Smart plan, if only they hadn't skipped the first step, which is making themselves popular by attracting loyal users.

So now the same user-base which is loyal to other services has turned against fantasy.ai, and a massive flood of negative PR is being directed towards them at every opportunity. The lack of loyalty to fantasy.ai, built through amazing customer service, is what allowed the community to see the signs of enshittification more clearly and turn against them from the start. Maybe fantasy.ai has enough economic muscle to push through the tsunami of bad PR and manage to pull off step 2 before step 1, but I highly doubt it.

But it's also interesting to see so many model creators being so easily sucked in without realizing exactly what they're signing up for. The money upfront for an aspiring creator might be good (or not; $150 is way lower than I expected), but if fantasy.ai succeeds in dominating the market, that deal will eventually turn into a ball and chain, and the same creators who made fantasy.ai so valuable to the user-base will find themselves having to do things like bribe fantasy.ai to simply show their models to the same users who already declared they wish to see them.

It's a trap, and it's surprising and a bit disheartening to see so many creators sleepwalking into it when we have ample history to show us this is exactly what will happen. As it has happened in every other such instance in the history of the web!