The playground schematic analogy for designing a fediverse service.

In recent days, the discussion around Lemmy has become a bit…spicy. There’s a few points of impact here. To list some examples:

This is not an exhaustive list. There’s significant grumbling about lemmy under the mastodon hashtag too.

On the flipside, there’s also been positive reinforcement towards lemmy and its dev, as can be seen from the admin of lemm.ee and many others in the lemmy ecosystem in that thread.

You’ll notice all of these are frustrations about the lack of sufficient moderation capabilities in lemmy’s tool-set. This typically comes from a lemmy admin’s perspective and the things that are very important for protecting themselves and their communities.

In the discussions around these issues, a few common arguments have been made which, while sounding reasonable at face value, I think are the wrong thing to say in the situation at hand. The problem is that the one making these arguments feels like they’re being more than fair, while the ones receiving them feel dismissed or disrespected.

Before I go on, I want to make clear that I am writing this from a place of support. I had been supporting lemmy for years before the big lemmy exodus, and since I made my own lemmy instance, I have created dozens of third party tools to help the ecosystem, because I want lemmy to succeed. That is to say, I’m not a random hater. I am just dismayed that the community is splintering like this, over what seems to me to be primarily a communication issue.

So one of the analogies made in the sunaurus thread likened lemmy development to designing a playground. It strikes me that this analogy is perfect, but not for the reason its author expects. Rather, it is perfect for exemplifying how someone who wholeheartedly supports FOSS developers might still misjudge the situation and escalate it through miscommunication.

In this analogy, the commenter likened lemmy development to building a playground, with external people asking for some completely unrelated feature, like a bird-watching tower, and expecting the developers to give it priority. The problem here is that the analogy is flawed. The developers are not building a playground for themselves. They’re building a playground schematic, which they expect people would and should deploy in many other locations.

Some people might indeed ask for “bird-watching observation posts” in such a schematic, and it would be more than fair to ask them to build those themselves. But it is fallacious to treat any and all requests as being as out of scope as that. Some people might request safety features on the playground, and those should absolutely be given more priority. We already know what can happen if you design a shitty playground, even if you give it away for free!

To extend this analogy, the other lemmy admins are not asking for luxury features. They are asking for improvements to the safety of the playground. Some people point out that metal slides become dangerous depending on the weather. Others point out that the playgrounds might be built in very unsafe areas, so a fence to protect the children from predators should be mandatory.

And here is where the disconnect in communication happens. The overworked developer is already busy designing the next slide which can get them paid, or making sure things don’t break down as fast, etc., and they perceive the safety requests as “luxury” items someone else should deal with themselves. However, for the people who have to deal with upset parents and missing children, this dismissive attitude comes across as downright malicious.

And thus you have a situation where both sides see the other as unreasonable. The devs see the people asking for the safety features as entitled, while the people suffering through the lack of those safety features perceive the developers as out-of-touch and dismissive.

Leave such a situation to fester long enough, and you get the exact situation we have now: the Lemmy software starting to get a bad reputation among the people most concerned about safety, while forks and rebuilds are popping up.

All of this hurts the whole FOSS ecosystem by splintering development effort into multiple projects instead of collaborating on a single one. It turns our strength into a weakness!

This also brings me to another argument I see the lemmy devs making somewhat too often: that they don’t have anything to gain from a larger community and just get more headaches. I always felt this was a patently absurd statement!

The lemmy devs are making more than 3K a month from lemmy. Enough that they claim they’re working on lemmy full-time. These funds don’t come because they’re running or developing a single forum for themselves. They come because they provide the “playground schematic”. If the community splinters into software other than lemmy, naturally the funds going towards lemmy development will likewise dry up.

This statement is completely upside down. The more people there are using and hosting lemmy, the more the lemmy developers benefit.

I would argue the people lemmy developers should be listening to most are exactly the people hosting lemmy instances. These are the people putting incalculable hours into running and maintaining the servers, often paying out of pocket every month to provide a service to others. Each admin is basically free value for the lemmy developers.

My position, in fact, is that this scales down in layers. Lemmy devs need to listen to instance admins most. Instance admins need to listen to community mods most. And finally, community mods should listen to their users most. In this way you create a bottom-up feedback mechanism that doesn’t easily overwhelm any single person, and everyone has a chance to be heard.

In the AI Horde, I follow a similar approach. The segment of my community I listen to the most is in fact not the ones giving me money. It’s the ones providing their free time and idle compute for no other benefit than their own internal drive: the workers. They are effectively using their time for the benefit of the whole ecosystem, which indirectly benefits me most. Without the workers, there would practically be no AI Horde, even if I were the only one remaining. Likewise, without lemmy admins, lemmy as a software would be dead, even if the lemmy.ml people kept hosting it forever.

So what can be done here? I think an important aspect is making sure we are talking on the same wavelength. Cutting down on miscommunication is very important to avoid exacerbating an already precarious situation.

Secondly, it’s totally understandable that the lemmy devs don’t have enough time for everything. But likewise, there’s a ton of people who need safety features and can’t get them. As such, in my opinion, priority should be put on building the frameworks that allow people to more easily extend lemmy’s functionality, even if that doesn’t match the lemmy developers’ vision or immediate roadmap.

For this reason I strongly suggest that effort should be put into developing a plugin framework for lemmy. It’s ironic to suggest a completely different feature when the problem is too many things to do, but this specific feature is meant to empower the larger community to solve their own problems more easily. So in the long term, it will massively reduce the incoming demands on the lemmy developers.

In the meantime, I do urge people to always consider that there’s a human behind the monitor on the other side. A lot of the time, people don’t have the skills to effectively communicate what they mean, which is even worse in text form. We all need to be a bit more charitable about what the other side is trying to say, especially when we’re trying to collaborate on a common FOSS project.

Post-Mortem: The massive lemmy.world -> lemmy.dbzer0.com federation delays.

A couple of days ago, someone posted on /0 (the meta community for the Divisions by zero) that incoming federation from lemmy.world (the largest lemmy instance by an order of magnitude) was malfunctioning. Alarmed, I started digging in, since a federation problem with lemmy.world would massively affect the content my community can see.

As always my first stop was the Lemmy General Chat on Matrix where I asked the lemmy.world admins if this appears to be something on their end. To their credit both their lead infra admin and the owner himself jumped in to assist me, changing their sync settings, adding custom DNS entries and so on. Nothing seemed to help.

But the problem must still be somewhere in lemmy.world, I thought. It’s the only instance where this is happening, and they upgraded to 0.19.3 recently, so something must have broken. But wait, this didn’t start immediately after the upgrade. Someone pointed out this very useful federation status page, which kinda pointed to the problem being only with lemmy.world.

Not quite: other big instances like lemmy.ml and lemm.ee were not having any issues federating with lemmy.world (even though two dozen others like lemmy.pt were), and they are as big if not bigger than lemmy.dbzer0.com. A problem originating from lemmy.world couldn’t possibly be affecting only some specific instances. To make matters worse, both lemmy.ml and I are using the same host (OVH), so I couldn’t even blame my hosting provider.

So obviously the main culprit is somewhere in my backend, right? Well, maybe. Problem is, none of the components of my infrastructure were overloaded; everything was sitting at 5-15% utilization. Nothing to even worry about.

OK, so first I needed to make sure it wasn’t somehow a network issue specifically between me and lemmy.world. I know OVH gave me a bum floating IP in the past and were completely useless at even understanding that it was faulty, so I had to stop using it. Maybe there’s some problem with my loadbalancers.

Still, I’m using haproxy, which is nothing if not fast and rock solid, so I didn’t really suspect the software. Rather, maybe it was a network issue with the LB itself. So the first thing I did was double the number of loadbalancers in play, by setting my DNS record to point to my secondary LB at the same time. This should lessen the amount of traffic hitting each LB and route part of it through a completely different VM, and thus show whether the problem was on the haproxy side. Sadly, this didn’t improve things at all.

OK so next step, I checked how long a request takes to return from the backend after haproxy sends it over. The results were not good.

What the haproxy timings basically showed is that a POST to my /inbox took between 0.8 and 1.2 seconds to return from the backend. This is bad! This is supposed to be a tiny payload telling you an event happened on another instance; it should be practically instant.
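
For anyone wanting to check the same thing on their own instance, this is roughly how you can pull those numbers out of the haproxy logs. A minimal sketch, assuming the default "option httplog" format, where the timer block looks like TR/Tw/Tc/Tr/Ta and Tr is the time spent waiting for the backend (older haproxy versions log Tq/Tw/Tc/Tr/Tt instead):

```python
#!/usr/bin/env python3
# Sketch: extract backend response times for POST /inbox from haproxy logs,
# assuming the default `option httplog` format.
import re
import sys
import statistics

# The timer block is the first run of five slash-separated integers
# (or -1), e.g. "0/0/1/812/813".
TIMERS = re.compile(r"(-?\d+)/(-?\d+)/(-?\d+)/(-?\d+)/(-?\d+)")

response_times = []
for line in sys.stdin:
    if '"POST /inbox' not in line:      # only federation deliveries
        continue
    match = TIMERS.search(line)
    if not match:
        continue
    tr = int(match.group(4))            # Tr: time waiting for the backend
    if tr >= 0:                         # -1 means the request was aborted
        response_times.append(tr)

if response_times:
    print(f"requests: {len(response_times)}")
    print(f"median backend time: {statistics.median(response_times):.0f} ms")
    print(f"worst backend time:  {max(response_times)} ms")
```

Pipe the haproxy log through it (python3 inbox_times.py < haproxy.log) and it spits out the median and worst backend times for federation deliveries.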

Even weirder, this was affecting all instances, not just lemmy.world. So this was clearly a problem on my end, but it also confused me. Why was I not having trouble with other instances? The answer came when I was informed that 0.19.3 added a brand new federation queue.

You see, the old versions of lemmy used to send all federation actions over as soon as they received them. Fire and forget style. This naturally led to federation events being dropped due to a myriad of issues, like network problems, downtimes, gremlins etc. So you would lose posts, comments and votes, and you would (probably) never realize.

The new queue added order to this madness by making each instance send its requests serially. A request would be sent again and again until it succeeded, and the next one would only be sent once the previous one was done. This is great for instances that aren’t experiencing issues like mine was. You see, at this point I was processing approximately 1 incoming federation request per second, while lemmy.world was sending around 3. Even worse, I would occasionally time out as well by exceeding 10 seconds to process, causing 2 more seconds of wait time.

Unlike lemmy.world, the other instances federating with mine didn’t have nearly as much activity, so 1 per second was enough to keep in sync with them. This explained why I was seemingly only affected by lemmy.world and nobody else. I was slow, but only slow enough to notice if the source had too much traffic.
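
To put some numbers on why only lemmy.world was affected, here’s a back-of-the-envelope sketch using the rough rates above (my estimates, not measured constants):

```python
# Back-of-the-envelope sketch using the rough rates mentioned above.
incoming_rate = 3.0      # activities/second arriving from lemmy.world
processing_rate = 1.0    # activities/second my backend could ingest

backlog_growth = incoming_rate - processing_rate          # activities/second
print(f"backlog grows by {backlog_growth:.0f}/s "
      f"= {backlog_growth * 3600:.0f}/hour "
      f"= {backlog_growth * 86400:.0f}/day")

# A smaller instance sending, say, 0.2 activities/second never outruns a
# 1/s ingestion rate, which is why only lemmy.world appeared "broken".
```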

OK, we know the “what”, now we needed to know the “why”.

At this point I’m starting to suspect something is going on with my database. So I had to start digging into stuff I’m really not that familiar with. This is where the story gets quite frustrating, because there just aren’t a lot of admins in the chat who know much about the DB internals of lemmy. So I would ask a question, or provide logs, and then sometimes had to wait hours for a reply. Fortunately both sunaurus from lemm.ee and phiresky were around, and they could review some of my queries.

Still, I had to know enough SQL to craft and finetune those queries myself, and how to enable things like pg_stat_activity etc.

Through trial and error we did discover that some insert/update queries were taking a bit too much time to do their thing, which could mean that we were I/O bound. Easy fix, disable synchronous_commit, sacrificing some safety for speed. Those slow queries went away, but the problem remained the same. WTF?!
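
For anyone else who ends up digging through this layer, the checks amounted to something like the following sketch (using psycopg2; the 100ms threshold is arbitrary, the connection string is a placeholder, and ALTER SYSTEM needs superuser rights plus a config reload):

```python
# Sketch of the kind of DB-side checks described above, using psycopg2.
# Assumes a superuser connection; the thresholds are arbitrary.
import psycopg2

conn = psycopg2.connect("dbname=lemmy user=postgres host=db.example.internal")
conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block

with conn.cursor() as cur:
    # 1. Look for queries that have been running suspiciously long.
    cur.execute("""
        SELECT pid, now() - query_start AS runtime, state, left(query, 80)
        FROM pg_stat_activity
        WHERE state <> 'idle'
          AND now() - query_start > interval '100 milliseconds'
        ORDER BY runtime DESC;
    """)
    for pid, runtime, state, query in cur.fetchall():
        print(pid, runtime, state, query)

    # 2. Trade a little durability for speed: commits no longer wait
    #    for the WAL flush to hit disk.
    cur.execute("ALTER SYSTEM SET synchronous_commit = off;")
    cur.execute("SELECT pg_reload_conf();")
```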

There was nothing else clearly slow in the DB, so there was nothing more we could do there. So my next thought was, maybe it’s a networking issue between my loadbalancers and my backend. OK so I needed to remove that from the equation. I set up a haproxy directly on top of my backend which would allow me to go through the loopback interface and have 0 latency. For this I had to ask the lemmy.world admins to kindly add lemmy.dbzer0.com directly to their /etc/hosts file so they alone would hit my local haproxy.

No change whatsoever!

At this point I’m starting to lose my mind. It’s not networking between my LB and my backend, and it’s not the DB. It has to be the backend. But it’s not under any load and there’s no errors. Well, not quite. There’s some “INFO” logs which refer to lost connections, or unexpected errors, but nobody in the chat seems to worry about them.

Right, that must mean the problem is networking between my backend and my database, right? Unlike most lemmy instances, I keep my lemmy DB and my backend separated. Also, the DB has a limited number of connections, and the lemmy backend limits itself to a small connection pool. Maybe I was running out of connections because of slow queries?

OK let’s increase that to a couple thousands and see what happens.

Nothing happens, that’s what happens. Same 1 per second requests.

As I’m spiraling more and more towards madness and the chat is running out of suggestions, sunaurus suggests that he add some extra debugging to lemmy, which I would run to try and figure out which DB query is losing time. Great idea. Problem is, I have to compile lemmy from scratch to do that. I’ve never done that before. Not only that, I barely know how to use docker in the first place!

Alright, nothing else I can do, got to bite that bullet. So I clone the lemmy backend and, while waiting for sunaurus to come online, I start hacking at it to figure out how to compile a lemmy backend docker container from scratch. I run into immediate crashes and despair. Fortunately nutomic (one of the core devs) walked by and told me the git commands to run to fix it, so I could proceed with cooking my very first lemmy container. Then nutomic helped me realize I don’t need to set up a whole online repo to transfer my docker container. The more you know…

Alright, so I cooked a container and plugged it into a whole separate docker infra, which is only connected to the lemmy.world loadbalancer, so I could isolate the logs to nothing but federation requests. So far so good.

Well, not quite, unfortunately I forgot that the “main” branch of lemmy is actually the development branch and has untested code in there. So when I was testing my custom docker deployment, I migrated my DB to whatever the experimental schema is on main. Whoops!

OK, nothing seemingly broke. Problem for a different day? No, just foreshadowing.

Finally sunaurus comes back online and gives me a debug fork. I eagerly compile and deploy it on prod and then send some logs to sunaurus. We were expecting to see 1 or 2 queries struggling, so maybe a bad lock situation somewhere. We did not expect to see ALL queries, including the simplest ones, such as looking up a language, take 100ms or more! That can’t be good!

Sunaurus connects the dots and asks the pertinent question: “Is your DB close to the Backend, geographically?”

Well, “Yes”, I reply, “I got them in the same datacenter”. “Can you ping?” he asks.

OK, I ping. 25ms. That’s good, right? Well, in isolation, that’s great. Where it’s not so great is for backend-to-DB communication! That kind of latency implies a distance of 1000+ km.

You see, typically a loadbalancer just makes one request to the backend and gets one reply, so a 25ms roundtrip is nothing. However, a backend talks to the DB a lot. In this instance, for every incoming federation action, the backend does something like 20 database calls to verify and submit it. Multiply each of these by a 25+25ms roundtrip and you get 1000ms of extra latency before any actual processing on the DB!
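
The arithmetic is brutal. A rough sketch using the numbers from this incident:

```python
# Rough latency budget for one federation action, using the numbers above.
queries_per_action = 20     # roughly how many DB calls one /inbox POST makes
roundtrip_ms = 25 + 25      # query out + reply back between backend and DB

network_overhead_ms = queries_per_action * roundtrip_ms
print(f"{network_overhead_ms} ms spent on the wire per action")    # 1000 ms

# Same backend with the DB in the same datacenter (~1-2 ms roundtrips):
print(f"{queries_per_action * 2} ms spent on the wire per action") # ~40 ms
```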

But how did this happen? I was convinced all my servers were in the same geographic area. So I go to my provider panel and check. Nope, all my servers BUT the backend are in the same geographic area. My backend happens to be around 2000 km away. Whoops!

Turns out, back when I migrated my backend because of performance issues, I failed to pay attention to that little geographic detail. Nevertheless, it all worked perfectly well until this specific set of circumstances, where the biggest lemmy instance upgraded to 0.19.3 and switched to serial federation, which my slow-ass connection couldn’t keep up with. In the past, I would just get flooded by sync requests from lemmy.world as they came. I would be slow, but I’d process them eventually. Now, the problem became obvious.

Alright, it’s time to roll up my sleeves and migrate servers! Thank fuck I have everything written as code in Ansible, so the migration was relatively painless (other than slapping Debian 12 around to let me do fucking docker-compose operations with python, goddamnit!)

A couple of hours later, I had migrated my backend to the same DC as the database, and as expected, my processing time per federation action suddenly dropped to the order of 50ms, instead of 1000ms. This means I could ingest closer to 20 actions per second from lemmy.world, while it was only producing around 3/s of new activity from its userbase. Finally we started catching up!

All in all, this has been a fairly frustrating experience and I can’t imagine anyone who doesn’t do IT infrastructure as their day job being able to solve this. As helpful as the other lemmy admins were, they were relying a lot on me knowing my shit around Linux, networking, docker and postgresql at the same time. I had to do extended DB analysis, fork repositories, compile docker containers from scratch, deploy them ad-hoc, etc. Someone who just wants to host a lemmy server would give up way earlier than this.

For me, a very stressful component was the lack of replies in the chat. I would sometimes write pages of debug logs, and there would be no reply from anyone for 6 hours or more. It gave me the impression that nobody had any clue how to help me and I was on my own. In fact, if it wasn’t for sunaurus specifically, who had enough infrastructure, Rust and DB chops to get an insight into where it was all going wrong, I would probably still be out there, pulling my hair.

As someone hosting a service like this, especially one with 12K people on it, this is very scary! While 2 lemmy core developers were in the chat, the help they provided was very limited overall, and this session mostly relied on my own skills to troubleshoot.

This reinforced in my mind that as much as I like the idea of lemmy (or any of the other threadiverse software), this is only something experts should try hosting. Sadly, this will lead to more centralization of the lemmy community into a few big servers instead of many small ones, but given the nature of the problems one can encounter, and the lack of support for fixing them if one is not an expert, I don’t see another option.

Fortunately this saga ended and we’re now fully in sync with lemmy.world. Ended? Not quite. You see, today I realized I couldn’t upload images on my instance anymore. Remember when I started my development instance of lemmy from main by mistake? Welp, that broke image uploads. So I had to also learn how to downgrade a lemmy instance. Fortunately sunaurus had my back on this as well!

To spare some people the pain, I’ve sent a PR to the lemmy docs to expand the documentation for building docker containers and doing troubleshooting. My pain is your gain.

This also gave me an insight into how the federation of lemmy will eventually break when a single server (say, lemmy.world) grows big enough to start overwhelming even servers that are not set up as badly as mine was. I have some ideas on how to work around some of this, so I plan to write a suggestion on how to become more future-proof, which would incidentally prevent the same issue that happened to me in the first place.

In the meantime, enjoy the Divisions by zero, which as a result of the migration should now feel massively faster as well!

Reddit is a dead site running

Yesterday I read the excellent article by Cory Doctorow, Let the Platforms Burn, and this particular anecdote stood out:

The thing is, network effects are a double-edged sword. People join a service to be with the people they care about. But when the people they care about start to leave, everyone rushes for the exits. Here’s danah boyd, describing the last days of Myspace:

If a central node in a network disappeared and went somewhere else (like from MySpace to Facebook), that person could pull some portion of their connections with them to a new site. However, if the accounts on the site that drew emotional intensity stopped doing so, people stopped engaging as much. Watching Friendster come undone, I started to think that the fading of emotionally sticky nodes was even more problematic than the disappearance of segments of the graph.

With MySpace, I was trying to identify the point where I thought the site was going to unravel. When I started seeing the disappearance of emotionally sticky nodes, I reached out to members of the MySpace team to share my concerns and they told me that their numbers looked fine. Active uniques were high, the amount of time people spent on the site was continuing to grow, and new accounts were being created at a rate faster than accounts were being closed. I shook my head; I didn’t think that was enough. A few months later, the site started to unravel.

This is exactly what is happening to Reddit currently. The most passionate contributors, the most tech-literate users, and the integrators who make all the free tools in the ecosystem around reddit which make that service so much more valuable, have left and will never look back.

From the dashboards of u/spez however, things might look great. Better even! The drama around their decision-making certainly caused a lot more posts and interactions, and the loss of the 3rd party apps drove at least a few users to the official applications.

But this is an illusion. Like MySpace before them, the metrics might look good, but the soul of the site has been lost. It’s not easy to explain, but since I’ve started using Lemmy full-time, I’ve seen the improvement in engagement and quality in real time. Half a month ago, posts could barely pass 2 digits; now they regularly break 3 and sometimes 4 digits. And the quality of the discussions is a pleasure to go through.

I’ve said it before, but reddit was never a particularly good site. Their saving grace was the openness of their API and their hands-off approach to communities. The two things they just destroyed. It’s those 3rd party tools and communities that made reddit what it is. And as the ecosystem around reddit sputters and dies, the one around the Threadiverse is progressing at an astonishing rate.

Not only are the integrators coming from reddit aware of what kind of bots and tools are going to be very useful, but a lot of those tools have shut off their reddit support and switched to the lemmy API instead, explicitly cannibalizing the quality of the reddit experience. And due to the completely open API of the Threadiverse, those tools now get unparalleled access and power.

Sure, if you visit reddit currently, you’ll see people talking and voting, but as someone who’s been there from the start, the quality has fallen off a cliff and is reaching terminal velocity. But it feels like one’s still flying!

It’s not just the quality of the posts, where only the most superficial meme stuff can rise to the top, and not just the quality of the discussion; even the mere vibe of the discussions is just lost.

There’s now significant bitterness and hostility, especially as the mods who were responsible for maintaining the quality have left, are being hands-off, or just don’t have the tools needed to keep up. I’ve heard from multiple people who are leaving even though they were not originally planning to, because the people left on reddit are just so toxic.

This is a very vicious cycle which will accelerate the demise of that site even further.

A house fire can go from a spark to a raging inferno in less than a minute. The flames consuming reddit are just now climbing up the curtains and it still appears manageable, but it’s already too late. Reddit has reached terminal enshittification and the only thing left for it to do, is die.

Fediseer Streamlining

Just one day ago I put out the initial release of the Overseer, and I was annoyed by the implementation almost immediately. The requirement to register on another Lemmy instance using a custom username, wait for manual approval (which could also lead to someone sniping that username and forcing me to manually delete it), and THEN register your instance on the Overseer was just too clunky.

While a few people did register, I realized almost immediately that it was very likely this would never take off. So I started rethinking how I could streamline this process so that it would require far fewer steps.

My initial plan was to simply register all available instances on the fediverse by default, and allow admins to claim them later. That would require me to somehow be able to contact those admins directly. While figuring out the best way to do that, I discovered I could theoretically talk to any fediverse instance directly, and not have to rely on a specific lemmy instance with all its limitations.

So I spent many, many hours figuring out how to do that. The documentation is frustratingly sparse and incomplete on this point, with my best guide being this blogpost from half a decade ago, written for a different language than python. Fortunately, in this day and age, I had access to ChatGPT, so I could ask it to translate the code on that page to python, and then, with some trial and error and digging in the official documentation, I had a working DM system for mastodon!
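
For the curious, the end result boils down to building a Create/Note activity addressed only to the recipient, signing the POST with an HTTP signature, and delivering it to their inbox. Here is a simplified sketch of roughly what that looks like (the actor URL, key path and recipient are placeholders, and a real sender should fetch the recipient’s inbox from their actor document instead of guessing it):

```python
# Minimal sketch of DMing a mastodon account over ActivityPub: build a
# Create/Note addressed only to the recipient, sign the POST with an HTTP
# signature (draft-cavage style), and deliver it to their inbox.
# All URLs and key paths here are placeholders.
import base64, hashlib, json, uuid
from datetime import datetime, timezone
from urllib.parse import urlparse

import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

ACTOR = "https://fediseer.example.com/actor"               # my actor document
KEY_ID = f"{ACTOR}#main-key"
RECIPIENT = "https://mastodon.example.com/users/some_admin"
INBOX = f"{RECIPIENT}/inbox"   # really: read this from the recipient's actor

note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": f"{ACTOR}/notes/{uuid.uuid4()}",
    "type": "Note",
    "attributedTo": ACTOR,
    "to": [RECIPIENT],                     # only the recipient => a DM
    "content": "<p>Hello from the Fediseer!</p>",
    "tag": [{"type": "Mention", "href": RECIPIENT,
             "name": "@some_admin@mastodon.example.com"}],
}
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": f"{ACTOR}/activities/{uuid.uuid4()}",
    "type": "Create",
    "actor": ACTOR,
    "to": [RECIPIENT],
    "object": note,
}
body = json.dumps(activity).encode()

# HTTP signature over (request-target), host, date and digest.
digest = "SHA-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()
date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
target = urlparse(INBOX)
signing_string = "\n".join([
    f"(request-target): post {target.path}",
    f"host: {target.netloc}",
    f"date: {date}",
    f"digest: {digest}",
])
with open("private.pem", "rb") as f:    # the actor's RSA private key
    key = serialization.load_pem_private_key(f.read(), password=None)
signature = base64.b64encode(
    key.sign(signing_string.encode(), padding.PKCS1v15(), hashes.SHA256())
).decode()
sig_header = (
    f'keyId="{KEY_ID}",algorithm="rsa-sha256",'
    f'headers="(request-target) host date digest",signature="{signature}"'
)

resp = requests.post(INBOX, data=body, headers={
    "Host": target.netloc,
    "Date": date,
    "Digest": digest,
    "Signature": sig_header,
    "Content-Type": "application/activity+json",
})
print(resp.status_code, resp.text[:200])
```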

Then I set out to do the same for lemmy, which is where the big frustration was waiting, as the documentation is practically non-existent, the error messages are cryptic, and Rust and the way lemmy is coded were completely impenetrable to me. Lots and lots of digging in the code, trial and error, and asking around in desperation, and I managed to figure out that the main blocking point was a “type” key on my activitypub instance, rather than a fault in my payload.

Unfortunately, figuring out how to “speak activitypub” took me the better part of my day. But the good news is that once I had that down, the rest was just a matter of time. I just had to refactor everything. And hell, while I was doing that, why not change the name and domain as well?

And that’s how Fediseer.com was born!

I have already updated my previous post with the new workflow, but the big difference is that it completely removes the need to manually register on an extra lemmy instance. Instead, one just has to “claim” their instance and they will get a PM directly in their mailbox!

Likewise, you can guarantee for a different instance even if its admins haven’t claimed it first, and they will get a helpful PM informing them about it. If the instance doesn’t exist yet, all you need to do is search for it in the whitelist and it will be automatically added, and then it can be guaranteed. My next step is to automatically import all known instances by pulling them from the federation, which should avoid the manual step of searching for them first.

So let’s see how it goes. Now with the ability to talk to fediverse instances directly, it also opens up some really fascinating doors for automation! Stay tuned!

Fediseer: A Fediverse Chain of Trust

Recently I’ve started running my own lemmy instance, as part of my decoupling from Reddit, due to them speed-running enshittification.

The instance has been growing nicely and holding up very well indeed. But there are dark clouds forming on the horizon, as more and more of the early adopters and people with principles are leaving that service and looking for alternatives.

The first signs of trouble appeared when I noticed that the top instances in the Fediverse Observer were growing by thousands of users per day, but had very little activity to speak of, with few threads and barely any comments. This was a clear sign of botted accounts being generated by the thousands.

My initial reaction was to cook up a quick REST API and a complementary script which would allow instance admins to quickly de-federate from instances with this amount of botted accounts, as it points to an instance with insufficient protections, and those accounts could easily be used in the future to spam others. A small pre-emptive measure. It’s not particularly sophisticated, but I wanted to get something out before trouble occurs.
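
The heuristic itself is nothing fancy: compare how many users an instance claims to have against how much it actually does, which every instance already publishes via nodeinfo. Something like this sketch (the thresholds are arbitrary examples, not the exact ones my script uses):

```python
# Sketch of the "suspicious instance" heuristic: an instance claiming tens of
# thousands of users but showing almost no posts or active users is probably
# full of botted signups. Thresholds here are arbitrary examples.
import requests

def nodeinfo(domain: str) -> dict:
    well_known = requests.get(
        f"https://{domain}/.well-known/nodeinfo", timeout=10).json()
    # Follow the first advertised nodeinfo schema link.
    return requests.get(well_known["links"][0]["href"], timeout=10).json()

def looks_botted(domain: str) -> bool:
    usage = nodeinfo(domain).get("usage", {})
    total = usage.get("users", {}).get("total", 0)
    active = usage.get("users", {}).get("activeMonth", 0)
    posts = usage.get("localPosts", 0) + usage.get("localComments", 0)
    if total < 5000:
        return False                 # too small to care about either way
    # Thousands of accounts, but almost nobody active and barely any content.
    return active < total * 0.01 and posts < total * 0.1

for domain in ["lemmy.example.org", "suspicious.example.net"]:
    print(domain, "suspicious!" if looks_botted(domain) else "looks fine")
```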

The Fediverse Fediseer was born, but I wanted something more. I feel like a big issue with the Fediverse as it stands right now is the same one email has: it’s trivial for someone to set up hundreds or thousands of fake fediverse domains and start spamming other instances. All it takes to open the door is for any account on those instances to follow something from the spam domains. Sure, instances with manual user approval are somewhat more protected, but this approach does not scale well, and will lose us the opportunity to capture the people looking for alternatives outside of corporate control. Likewise, any place with open registrations is available for botters to inject their fake accounts into, in order to follow their spam instances. In general, it’s a big problem to solve through de-federation alone.

My thought then was to find a way to utilize the “whitelisting” capabilities of lemmy and the fediverse to ensure that only trusted instances are federated with. But this has its own share of problems. In particular, it’s also difficult to scale, and it easily excludes people running their own instances.

However, the main benefit of whitelists is that one does not have to be constantly vigilant. A dedicated bot network can spawn thousands of new domains and sleeper accounts to flood the fediverse, and de-federation lists would have to grow endlessly in a fruitless fight against this. A whitelist, on the other hand, should theoretically remain relatively short, and natively protects against domain flooding.

So I wanted to come up with something that would both make it easy to compile and maintain whitelists, but also not lock out people running individual instances.

After a day of work, I’m excited to unveil the new Fediseer functionality which works around a Chain of Trust!

This all works via a REST API (for which I hope some enterprising people will make a fancy UI)

The first step is to use the Fediseer API to register your instance’s domain. You can do it from the provided interface, or use a curl call, but I hope in the future the community will develop way fancier UIs to handle this. You will need to provide a username which is an admin on the instance you want to claim (preferably yours).

If all goes well, the user you provided will then receive an API key via Private Message on their own instance, which they can then use to guarantee or endorse other instances.
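
In practice the whole flow is just a couple of HTTP calls. Something along these lines (the endpoint paths and payload fields here are illustrative placeholders, not the exact Fediseer API; check the API documentation for the real ones):

```python
# Illustrative sketch of the claim flow; the endpoint paths and payload
# field names are placeholders, not the exact Fediseer API.
import requests

FEDISEER = "https://fediseer.com/api/v1"

# 1. Claim your instance, naming an admin account on it.
requests.put(
    f"{FEDISEER}/whitelist/lemmy.dbzer0.com",   # hypothetical path
    json={"admin": "db0"},
    timeout=30,
)
# The named admin then receives a PM on their own instance containing an
# API key, which authenticates every later call.
API_KEY = "paste-the-key-from-the-PM-here"

# 2. Use that key to guarantee another instance.
requests.put(
    f"{FEDISEER}/guarantees/some-new-instance.example",   # hypothetical path
    headers={"apikey": API_KEY},
    timeout=30,
)
```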

A new account on the Fediseer, however, is not visible on the whitelist API by default. This is because only instances which are “Guaranteed” by other instances are visible. The Fediseer starts with a core instance, fediseer.com, which functions as the root of the Chain of Trust. Either this instance, or another previously guaranteed instance, will have to guarantee your instance before it appears on the whitelist.

Using the Guarantee API endpoint you can now guarantee other instances with your own instance.

Once your instance is guaranteed, it also gets its first endorsement from its guarantor, and will now be displayed on the whitelist using the default settings.

This list can also be exported in csv format, for easy injection into lemmy whitelists.

This naturally forms a network of trust, with instances guaranteeing each other down the chain. The purpose of this guarantee is to prevent bad actors from sneaking in. So let’s pretend that a spam instance sneaks in and starts causing trouble. What do we do?

Very easy: instead of having to waste time going around asking everyone to defederate that instance, and reminding new users to do likewise, we simply withdraw our guarantee for that instance. Once a guarantee is withdrawn, that instance is no longer visible in the whitelist. Which means any servers which automatically pull and deploy the whitelist from the Fediseer will automatically reject such instances.
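
That automatic pull is what makes the whitelist actionable. A minimal sketch of such a sync job (the Fediseer CSV endpoint path is illustrative, and the lemmy side assumes 0.19-style bearer-token auth on PUT /api/v3/site, so check the docs for your lemmy version):

```python
# Sketch of auto-syncing the Fediseer whitelist into lemmy's federation
# allowlist. The Fediseer CSV path is illustrative; the lemmy call assumes
# 0.19-style bearer auth on PUT /api/v3/site, so check your version's docs.
import requests

FEDISEER_CSV = "https://fediseer.com/api/v1/whitelist?format=csv"  # hypothetical
LEMMY = "https://lemmy.dbzer0.com"
JWT = "jwt-from-/api/v3/user/login"

# 1. Pull the current whitelist as a flat list of domains.
#    (Depending on the CSV shape you may need to drop a header row.)
csv_text = requests.get(FEDISEER_CSV, timeout=30).text
allowed = sorted({d.strip()
                  for d in csv_text.replace("\n", ",").split(",")
                  if d.strip()})

# 2. Push it as lemmy's allowed_instances. Anything not on the list stops
#    being federated with once the allowlist is applied.
resp = requests.put(
    f"{LEMMY}/api/v3/site",
    json={"allowed_instances": allowed},
    headers={"Authorization": f"Bearer {JWT}"},
    timeout=30,
)
print(resp.status_code)
```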

But this goes even further. Let’s say someone pretends to be nice in order to start letting spam instances into the whitelist. When people shout about it, they say the right words and withdraw their guarantees, but keep adding new ones in. Eventually, we’re going to notice that all the guarantees for spam seem to be coming from the same instance. When that happens, we just withdraw our guarantee for that instance, which does 3 things: A) it removes that instance from the whitelist; B) it prevents that instance from guaranteeing or endorsing others; C) it removes the instances lower in that instance’s branch of the chain as well! So if instance A faked being nice, then guaranteed 20 spam instances once in, all it would take for those 20 spam instances to be removed from the whitelist is for the guarantor of instance A to withdraw their guarantee. If they don’t want to do that, then whoever guaranteed them might withdraw theirs, and so on up the chain until we find someone who does.
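
To make the cascade concrete, here’s a toy model of the chain as a simple tree of guarantees (not the Fediseer’s actual code, just the rule it implements):

```python
# Toy model of the Chain of Trust: an instance is whitelisted only if it has
# an unbroken chain of guarantees back to the root. Withdrawing a guarantee
# therefore drops the target AND everything it guaranteed further down.
guarantees = {
    "fediseer":   ["instance-a", "instance-b"],
    "instance-a": ["spam-1", "spam-2"],   # A snuck 2 spam instances in
    "instance-b": ["instance-c"],
    "spam-1": [], "spam-2": [], "instance-c": [],
}

def whitelist(root: str = "fediseer") -> set[str]:
    """Everything reachable from the root through guarantees."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(guarantees.get(node, []))
    return seen - {root}

print(sorted(whitelist()))
# ['instance-a', 'instance-b', 'instance-c', 'spam-1', 'spam-2']

# The root withdraws its guarantee for instance-a...
guarantees["fediseer"].remove("instance-a")
print(sorted(whitelist()))
# ...and the whole branch below it drops off the whitelist too:
# ['instance-b', 'instance-c']
```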

Now, I won’t claim that this system is perfect. Human nature being what it is, I expect power groups will form which might not agree with who else is guaranteed. This is where the Fediseer being FOSS helps. If there’s a core disagreement between big groups of fediverse projects about who should be guaranteed in the first place, I expect other Fediseers to spawn with their own Chains of Trust which are more or less strict than the others. An instance could very well be registered with multiple Fediseers and thus be part of different whitelists. I am perfectly aware that I will not be able to satisfy everyone’s limits, but I hope to provide a tool that can!

The main point here is to create a system which can start building Chains of Trust, which have manual human control but are easy to adjust as the environment changes.

There’s more here that I haven’t mentioned yet, such as the endorsements, where guaranteed instances can endorse others, and anyone can set their whitelist to require more or fewer endorsements. Or being able to whitelist only instances with endorsements from another instance. Or how the Fediseer will PM you about changes to endorsements and guarantees, etc.

There’s plenty of ideas to add. This is merely the first step. And I’d love you all to help me take more of them!

Reddit worked despite reddit.

I visit Reddit all the time. And I visited Digg before that. In fact, I’ve been hooked on this mode of operation since Digg. Suffice to say, something about link aggregation tickles my ADHD brain just right.

However, with the recent blackout of a big part of reddit, I decided to start my own Lemmy instance and use Lemmy through it as my primary. Since I started this experiment, I feel the urge to visit Reddit for my “fix” less and less. I have some thoughts about that.

In Twitter vs Mastodon, AKA “micro-blogging”, the value was in the specific people one followed, which made it way harder to switch services because one was held back by other people. I.e. the people kept each other locked in. Similar to how Facebook keeps everyone locked in its walled garden, because it’s the only social media their parents and grandparents managed to learn to use.

In Reddit however, the value is all about the specific forums, or “subreddits” in the lingo. The specific people one was talking to never really mattered. What was important was the overall engagement and the general sense of shared interest. This has always been the core strength of Reddit, and its early pioneers like Aaron Swartz understood that.

This is why the minimalist reddit of old managed to dethrone Digg when the latter decided that its core principle wasn’t user-curated content, but linkspam. The people who migrated to Reddit made it what it is today, by creating and nurturing their communities over the years.

Any beneficial actions by reddit itself have either followed what the community was already doing (such as adding CSS options or on-boarding the automoderator bot), or were forced by bad optics, such as when they were pushed to finally ban /r/coontown, /r/fatpeoplehate, /r/jailbait (which their current CEO moderated, btw) etc.

The community and the people who run the subreddits have always had to make the minimalist options allowed to them work. They had to develop their own tools and enhancements, such as RES and Moderator Toolbox, while Reddit couldn’t even provide much-requested functionality to counter known abuses like cross-subreddit raiding.

Instead, Reddit focused on adding useless features nobody asked for, like NFTs. On the usability side, the new look was their push to take the site more towards a generic social media network, with friends, follows, awards and avatars, instead of focusing on their core product: link aggregation and discussions.

In fact, every action they took was laser-focused on social-media lock-in, extracting wealth, and adding features people didn’t care for, which is why most third party apps simply ignored all that stuff.

Through all this, their valuable communities kept fighting against reddit management’s pushes so that they could do what was right, even if some lost that fight, like /r/IAmA, which became but a shadow of its former self when the cowardly owners fired the low-level employee leading its success and scapegoated their then female CEO for it.

Eventually, though, something had to give, and reddit seems to have realized that their users are too stubborn to simply accept the new paradigm reddit designed for them, where they watch more ads, buy more reddit gold and get addicted to NFTs. And 3rd party apps enabled users to use the valuable part of reddit and skip the enshittification all too easily.

So they had to go. And here we are.

Unfortunately for reddit, since the core value of reddit has always been the links, and the discussions around said links, instead of specific people and a social network around them, it is stunningly easy to jump ship. It doesn’t take a lot to keep a community going on Lemmy instead of Reddit. All it needs is a handful of dedicated people to keep finding and posting links, and the discussions and memes will easily follow.

I don’t need to know that the links are coming from Gallowboob; in fact, I never cared who posted the links or started the discussions. Reddit has had the “friends” feature for close to a decade now, and I have “friended” fewer than a handful of people. There’s literally nothing holding me and most people back except our existing routines.

There is of course still a lot of momentum in reddit communities, and a lot of mods who really don’t want to lose their status. Nevertheless, I’m finding I’m not actually missing much by staying exclusively on lemmy atm, and I see a lot of people realizing the same thing increasingly fast. The loss of major apps like Apollo, RIF and Sync has already been the final nail for a lot of people.

This exodus might already be unstoppable unless reddit completely capitulates and goes back on their API plans. But I’m not holding my breath.

Feel free to come and hang out at the Divisions by zero lemmy instance btw. We’ll do fun things!