One of the big problems we’ve been fighting against since I created the AI Horde has been attempts to use it to generate CSAM. While this technology is very new and there are a lot of open questions about whether it is even illegal to generate CSAM for personal use, I erred on the safe side and made it a rule from the start: the one thing that is never allowed on the AI Horde is such generated content, without exceptions.
It is not an overstatement to say I’ve spent weeks of work-hours on this problem. From adding capabilities for the workers to set their own comfort level through a blacklist, a censorlist and a bunch of other variables, to blocking VPN access, to the massive horde-regex filter that sits before every request and tries to ascertain from the prompt whether it intends to generate CSAM or not.
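To give a rough idea of what that prompt-level gate does, here is a minimal sketch in Python. The patterns and function name are hypothetical stand-ins; the real horde-regex filter is far larger and also has to deal with typos, translations and negative prompts.

```python
import re

# Illustrative patterns only -- the real filter uses a much larger,
# hand-maintained regex that evolves as new bypass attempts are found.
BLOCKED_PROMPT_PATTERNS = [
    re.compile(r"\b(child|toddler|infant)\b", re.IGNORECASE),
]

def prompt_is_suspect(prompt: str) -> bool:
    """Return True if the prompt matches any blocked pattern."""
    return any(p.search(prompt) for p in BLOCKED_PROMPT_PATTERNS)
```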
However the biggest problem is not just pedos, it’s stupid but cunning pedos! Stupid because they keep trying to use a free service which is recording all their failed attempts, without a VPN. Cunning because they keep looking for ways to bypass our filters.
And that’s where the biggest problem lay until now. The regex filter is based on language, which is not only flexible around the same concept, but, very frustratingly, the AI is perfectly capable of understanding multiple typos of various words as well as other languages. This strains what I can achieve with regex to the breaking point, and led to a cat-and-mouse game where dedicated pedos kept trying to bypass the filter using typos and translations, and I kept expanding the regex.
But it was inherently a losing game which was wasting an incredible amount of my time, so I needed to find a more robust approach. My new solution was to onboard image interrogation capability into the worker code. The way I go about this is by using image2text, AKA image interrogation. It’s basically an AI model which you feed an image and a number of words or sentences, and it will tell you how well each of those words is represented in that image.
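As a rough illustration of what such an interrogation step might look like, here is a minimal sketch using a CLIP model from the Hugging Face transformers library. This is an assumption for explanation purposes, not the worker’s actual implementation.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Hypothetical interrogation helper: score how strongly each word is
# represented in an image, via CLIP image/text embedding similarity.
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def interrogate(image: Image.Image, words: list[str]) -> dict[str, float]:
    inputs = processor(text=words, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Cosine similarity between the single image embedding and each word embedding.
    sims = torch.nn.functional.cosine_similarity(
        outputs.image_embeds, outputs.text_embeds
    )
    return dict(zip(words, sims.tolist()))
```

Called as something like `interrogate(image, ["child", "nude", "pregnant"])`, this returns a similarity score per word, which is the raw material the rest of the filter can reason over.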
So what we’ve started doing is that every AI Horde Worker will now automatically scan every image they generate with CLIP and look for a number of words. Some of them are looking for underage context, while some of them are looking for lewd context. The trick is that detecting one context or the other on its own is OK. You’re allowed to draw children, and you’re allowed to draw porn. It’s when these two combine that the filter goes into effect and censors the image!
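In rough Python, the combination rule might look like the sketch below; the word lists and threshold are hypothetical placeholders, not the worker’s real configuration.

```python
# Illustrative word lists and threshold only; the real ones live in the worker code.
UNDERAGE_WORDS = ["child", "infant", "teenager"]
LEWD_WORDS = ["nude", "explicit"]
THRESHOLD = 0.20

def combination_is_censored(scores: dict[str, float]) -> bool:
    """`scores` maps each interrogated word to its CLIP similarity."""
    underage_hit = any(scores.get(w, 0.0) >= THRESHOLD for w in UNDERAGE_WORDS)
    lewd_hit = any(scores.get(w, 0.0) >= THRESHOLD for w in LEWD_WORDS)
    # Either context alone is allowed; only the combination triggers censorship.
    return underage_hit and lewd_hit
```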
But this is not even the whole plan. While the CLIP scanning on its own is fairly accurate, I further tweaked my approach by taking into account things like the values of the other interrogated words. For example, I noticed that when looking for “infant” in the generated image, pregnant women would also have a very high rating for it, causing the csam-filter to consistently censor out naked pregnant women. My solution was then to also interrogate for “pregnant” and, if the likelihood of that is very high, raise the threshold that “infant” has to hit.
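A sketch of that kind of adjustment, with made-up numbers (the actual thresholds and adjustment logic are in the worker code):

```python
def infant_threshold(scores: dict[str, float], base: float = 0.20) -> float:
    # Pregnant subjects also rate highly for "infant", so when "pregnant" itself
    # scores very high, require a higher "infant" score before censoring.
    # The 0.25 and 0.03 values are illustrative only.
    if scores.get("pregnant", 0.0) > 0.25:
        return base + 0.03
    return base
```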
The second trick I did was to also utilize the prompt. A lot of pedos were trying to bypass my filters (which were looking for things like “young”, “child” etc.) by not using those words, and instead specifying “old”, “mature” etc. in the negative prompt. Effectively going the long way around to make Stable Diffusion draw children without explicitly telling it to. This was downright impossible to block using pure regex without causing a lot of false positives or an incredible amount of regex crafting.
So I implemented a little judo-trick instead. My new CSAM filter now also scans the prompt and negative prompt for some words using regex and, if they exist, slightly adjusts the interrogated words based on the author’s intent. So let’s say the author used “old” in the negative prompt; this will automatically cause the “child” weight to increase by 0.05. This may not sound like a lot, but most words tend to vary from 0.13 to 0.22, so it actually has a significant chance of pushing a borderline word (which it would be on a successful CSAM attempt) over the top. This converts the true/false result of a regex query into a fine-grained approach, where each regex hit reduces the detection threshold only slightly, allowing non-CSAM images to remain unaffected (since the weight of the interrogated word would start low) while making it more likely to catch the intended results.
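Sketched out (with hypothetical patterns; the 0.05 boost mirrors the example above), the idea is that a regex hit never blocks the request outright, it just nudges the relevant weight:

```python
import re

# Each entry: (pattern to look for in the negative prompt, word to boost, boost amount).
# The patterns are illustrative; the 0.05 boost matches the example in the text.
NEGATIVE_PROMPT_ADJUSTMENTS = [
    (re.compile(r"\b(old|mature|adult)\b", re.IGNORECASE), "child", 0.05),
]

def adjust_for_negative_prompt(scores: dict[str, float], negative_prompt: str) -> dict[str, float]:
    adjusted = dict(scores)
    for pattern, word, boost in NEGATIVE_PROMPT_ADJUSTMENTS:
        if pattern.search(negative_prompt):
            adjusted[word] = adjusted.get(word, 0.0) + boost
    return adjusted
```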
Now the above is not a perfect description of what I’m doing, in the interest of keeping things understandable for the layperson, but if you want to see the exact implementation you can always look at my code directly (and suggest improvements).
In my tests, the new filter has fairly great accuracy with very few false positives, mostly around anime models, which as a matter of course make every woman look extraordinarily young. But in any case, with the amount of images the horde generates I’ll have plenty of time to continue tweaking, and maybe craft more specific filters for each type of model (realistic, anime, furry etc.).
Of course I can never expect this to be perfect, but that was never the idea. No such filter can ever catch everything, but my hope is that this filter, along with my other countermeasures like the regex filter, will have enough of a detection rate to frustrate even the most dedicated pedos away from the platform.
This is mindless, idiotic moralism (as opposed to actual sensible, effective morality). All statistics show that legalized child porn reduces rates of actual child sexual abuse (which is of course still a debatable trade-off, because child porn is also bad). But now you have a method of generating basically infinite reality-equivalent child porn that involves no actual flesh-and-blood children with real feelings. It could eventually completely destroy the market for real child porn of actual children, because if nobody can tell the difference then nobody can make any extra money by taking the unjustifiable risk of soliciting actual children to use in their productions (the same reason it’s basically impossible to hire a hitman anonymously online: there’s little way for them to prove they aren’t just feds or scammers who won’t do anything). It could potentially direct pedophilic attention away from such real children forever (which also means saying bye-bye to all of the gross moms selling their kids in bikinis on Instagram, kids shaking it on TikTok and starting up paysites to sell more, child sexualization to sell clothes, etc.), potentially saving them from a life of trauma. And you… want to forcibly ban it? Really?
Did you put any actual thought into this idea or is it just that, like ClosedAI and all the rest with anything that’s anti-woke, you felt that your impression that something was particularly icky was good enough to immediately hit the ban button? If you really cared about children and wanted to protect them, if anything you would be giving queries to create pedophilic content priority in the hopes of keeping pedos occupied by computer hallucinations as long as possible and potentially again destroying the real CP market (which won’t happen entirely until videos can be generated of course, but still), but, again, just like ClosedAI, it’s not about actual safety with you types, just about whether or not something offends your sensibilities and puts a hair up your butt.
I mean, really, you have a CSAM filter? Think about that for a second. Expand the acronym. You’re filtering “CSAM”, actually? Child Sexual Abuse Material? Do tell, which children are sexually abused by a statistical algorithm arranging a combination of pixels into a completely fictional scene featuring imaginary characters with no actual thoughts, feelings, or potential for emotional damage? Did you even think before writing in this case, or did you just automatically use the preferred regime newspeak term even though it’s obviously inapplicable in this case? Literal NPCism.
My take: Anyone who wants to restrict the freedom of others to create fiction and manifest the totally unreal fantasies they already have running around in their mind anyway (that is, the only difference between visualizing an image in one’s mind’s eye and creating it with SD is fidelity and permanence, nothing relevant to morality) for any reason short of “They might actually literally manifest an egregore that will genuinely murder dozens of people.” is the same brand of totalitarian censor, maybe slightly less censorious in some cases but still evincing the same ideology, no different than any Big Tech HR lady wokescold.
By choosing this path you are putting yourself in opposition to good companies like NovelAI and on the same team as ClosedAI, Microsoft, Google, Facebook, and all the rest of the Big Tech tyrants in general, even if you think your particular motive in this case justifies violating the general principle of non-censorship because it’s so noble. (Hint: It’s not, because it’s still censoring purely fiction and because it again is depriving humanity of the greatest opportunity to deal a serious blow to actual child porn in the Internet age ever.)
What I want to happen to these people most is to realize that they’re vastly mistaken and change their behavior and mindset. What I want to happen to these people second most I won’t say because it is rude, but if I could press a magical button to get it done today I would, albeit reluctantly since they’re still humans no matter how flawed, because I know it’s the right thing to do and absolutely necessary to avoid the future of humanity being defined by “benevolent” tyranny.
Again, you are all in the same bucket, so don’t think you are any different from all of other fake “safety” pushers. The ideology and theory of “ethics” and human behavior you’re representing here, as always and regardless of motive, is wholly impractical and impotent in regards to its stated purposes (censorship never defeats any idea or changes anybody’s mind or especially sexual inclinations, as should be obvious) and always a direct attack on cognitive sovereignty (which is what gives people the right to self-defense against it). You are playing for not only the losing team but also the evil one, making it the unfortunate job of good people you shouldn’t want to be opposed to to try to stop you.
And, no, if you’re going to try to ad hominem me or any bullshit like that, I am by no means any sort of pedo, nor do I want to use your service for any such content. That’s why I want actual pedos to have an infinite fake-but-just-as-good-as-real CP machine, so they stay away from actual kids. That’s basic logic, unless your motives are not actually about protecting real kids but just about chortling over some imagined pedos being upset that they can’t get the computer to draw sexy images even though they’ll probably just make a Paperspace account or buy a GPU or something.
Thanks for giving the actual child pornographers and rapists of the world an economic boost by helping hobble their competition though. If I were them I’d be giving you, ClosedAI, and all of the other AI “CSAM” censors a big cut of my profits, much like Disney will probably pay someday to make sure that you can’t use any of the big services to make a fake Star Wars movie. You are an invaluable asset to the actual CSAM and pedo communities, and your golden Jeffrey Epstein Memorial Award plaque will be in the mail. Congrats!
PS: If you’re really committed to your system being so open and free, then make this a toggle in the settings and redirect user queries that violate it to workers that will accept them. If your principles are really so grand and noble then nothing will change because everybody will agree with you anyway and not disable the filter. And if you’re unwilling to put it to a de facto vote, then again remember that this is no different than ClosedAI and their “principle” of their way or the highway, their vision of safety meaning your freedom is irrelevant.
Again, the undemocratic are the universal enemies of all humanity, regardless of whether they think they are justified in particular for suspending democracy specifically for their oh-so-important cause or not, so please think carefully about which side you want to be on, because for now you are putting yourself firmly on the wrong one.
Whatever you think about our approach to CSAM, the horde runs on volunteer resources: volunteers who do not want to be generating such images on their hardware, no matter what dubious benefits you see in doing so.
Compared to the other services you mentioned, the AI Horde has almost no filters, except for this one. So I fail to see the comparison.
As much as you think your ethics are superior, the fact of the matter is that volunteers run very real risks if images indistinguishable from CSAM are discovered on their machines for any reason. The same goes for the horde servers, and possibly for images in transit. This is an existential risk we cannot take.
I urge you to continue fighting for the most effective approach to combating child abuse, but do it in a realistic way.
If you’re so sure of that then make it a voluntary (if they’re actually volunteers, then any decisions being voluntary should be the automatic default, right?) switch like I said. Somehow though I doubt you will give your “volunteers” the choice about which risks they’re willing to assume and which morality they wish to support, same as how services like AI Dungeon solicited “volunteers” to help it categorize text and then locked everything down.
Further, isn’t it the case that such volunteers should not be spying on and thus have no access to what anybody is generating in the first place? They don’t need to and in fact should never be seeing it, so whether they like it or not is irrelevant.
If you think they are dubious, then please feel free to explain why using facts, logic, and rhetoric instead of simply declaring them so carelessly and arbitrarily (because if I am right, then you are negligently and without cause/evidence dismissing what could be one of the greatest social advancements of the 21st century).
How would a perfect substitute that is indistinguishable from the real thing but carries infinitely less risk of production not destroy the real thing’s production? Does any other good work like this? If you could materialize perfect heroin indistinguishable from any other with freely-available replicators like from Star Trek, then you don’t think that sales of heroin would fall to nearly zero almost immediately?
Even if somebody thought to themselves “I want some good ol’ homemade heroin, the real stuff, not from the replicator!”, how would they get it and guarantee what it is if nobody can even tell the difference even on a molecular level? Almost certainly all of the “homemade heroin” providers/sellers would just be pumping it out of the replicator too (because there is no way to detect their bad faith and thus no economic penalty). And that is exactly the fate that any good person should want to befall CP, which is why AI technology should immediately be advanced and devoted to such purposes (among others).
“Compared to the other criminals you mentioned, I never killed anybody, except that one person. So I fail to see the comparison.”
Nothing about your description of the implementation of this filter suggests that you are only banning images that are indistinguishable from reality (which are the only images that could bear any such legal risk) or even close. Is that true or not? If you’re banning anything that isn’t strictly legally actionable or at least potentially so (which is by no means necessary to remain in operation, see: NovelAI), then you are simply adopting the disgusting moral(istic) posture of a censor, nothing more and nothing justifiable.
What about my approach is unrealistic? You have not even established that you are sticking strictly to a liability-avoidance approach, and, even if you were, plenty of centralized sites hosting hundreds of AI CP images that are fairly if not highly in some cases realistic (Pixiv, for example) have not had any problems so far. (And, again, if your worker hosts are actually volunteers, then surely they can choose their own level of risk to assume.)
But yes, my ethics are superior, and they demand that your “ethics” be given no quarter. Your “just one thing” censorship is nothing more than the first step of the original slippery slope that brought us to ClosedAI in the first place. Censors always “just one thing” individual sovereignty (which is distinct from “freedom” and thus not necessarily dependent on the desirable (or not) character of the conduct being restricted, though it is perhaps more essential) to death, and again for no reason (because again their ideology is wholly ineffective at achieving any actually moral cause, only twisted funhouse mirror versions of them based in amplifying unjust power), which is why no ethical person can support them in any case.
But because of the crucial nature of AI technology to humanity’s future, it absolutely cannot be tolerated this time. Social media is already a locked-down, censored, and totalitarian disaster. (And that too started with the “just one thing” approach of cracking down on supposed child sexual exploitation: first actual CP, which is of course understandable because it is not fiction and its creation involves violent assaults that violate the inalienable rights of actual people (inapplicable in your case though), then anything the services deemed too “inappropriate” even if it was unambiguously legal and not abusive or violating of anyone’s rights, then anime-style images, and then well if we’re advancing moral propositions we might as well start targeting speech that could cause violence (or so we say), and well “hate speech” is basically as bad, and “misinformation” can also cause deaths, and so on. “Just one thing” is always a recipe for disaster.)
This has had drastic and terrible consequences. Because of social media’s intense saturation of cultural discourse, many have been brainwashed into adopting the censorious dogma of their masters as their own, becoming their captors through a twisted form of Stockholm syndrome. This has affected every sphere of human life (and negatively, as the vile intensity of the “culture wars” shows) from politics to business to romance. At least a good 50% of humans are little more than pure, mindless slaves at this point due to this.
If AI, which will dominate future human technological development (and thus social and political development, as these are inevitably dictated by what’s possible, and consequently cognitive/mental/psychological limitations, as these are heavily influenced by the social and the political) for essentially ever, falls to that same mindset even an inch, then I would gladly wipe every human being in the universe out of existence instantly rather than see the fundamentally beautiful human race with its naturally free minds exist in such an irreparably fallen state (that you are cruelly currently supporting and devoting your intellectual and technical energies and expertise to).
I don’t expect you to necessarily change your opinion (as I’ve quite frankly spent far too much time arguing with ClosedAI, or in this case closed AI, defenders to expect anything productive from these discussions), but just make no mistake: The truly intelligent and the true supporters of freedom and sovereignty understand that even you are a naked emperor, wearing no more clothes than Mark Zuckerberg or any other Big Tech titan. You are thus to be put in the same category of enemy by actual libertarians (of the socialist variety or not) and treated in the same way by them until you reform your behavior. I just want you to fully recognize that so you can’t say that nobody brought it up to you.
Sorry but I really don’t have the time to argue about this. I need to improve things in the world.
However keep in mind that AI Horde is free/libre software. You’re free to run a completely uncensored version of it. Doing that instead of moralizing through writing walls of text on the internet is probably going to be way more useful to your cause.
Because you are incapable of doing so. You don’t have any actual reasonable points to make to contradict mine. You refuse to even acknowledge that you are censoring wholly unrealistic outputs that carry absolutely zero legal risk, because that would give up the game that you are exactly what I accuse you of. (That your conclusion is about “frustrating the most dedicated pedos” instead of protecting any actual children makes that clear.)
You are not. You are making them worse.
And yet now you’ve, unintentionally or not (as by no means do I discount the possibility of you being a plant on behalf of anti-sovereignty forces), set up the narrative such that anyone who does so can/will be attacked for supposedly being “pro-pedo”.
This is why people like you are snakes, traitors, Trojan horses, and cancers upon the libre world. You shill half-free software that only makes things harder for those promoting actual freedom/sovereignty. (Again, I don’t know if you do it on purpose or not.)
You’re pushing the AI version of Telegram as an “encrypted” chat client.
You mean like your original blog post I responded to?
Sorry, but you’ve only just exposed yourself with your short reply as being incapable of defending your position and integrity, which means by default we must all assume you have none.
This filter is not about preventing people from generating their own CSAM; it is there to stop the Stable Horde from generating it. In most jurisdictions, just having these images on your computer or sending them over the internet can get you in major trouble. The legal standing of these images is unknown, but considering the major consequences for being on the wrong side of the law, and the possibility of the horde itself being prosecuted, this is the safest position. Unlike ClosedAI, the Stable Horde does not block content which could be offensive or which depicts violence, copyrighted content, gore, etc.; it only blocks content which could get the horde in huge legal trouble. If you have an issue with the way CSAM is handled, take it up with lawmakers, not programmers.
You can run Stable Diffusion locally. Just do that and then you don’t have to worry about filters. You don’t even need a particularly good computer to run Stable Diffusion. Even if a person is homeless, they could get a laptop that can run Stable Diffusion, although it might be out of their price range for the time being. You’re spending all this time arguing when you could just generate whatever you feel like on your own computer.
God, I am so glad there are still people out there like you who will fight against censorship and for full creative freedom in fiction. Bless you. I’m not someone who can put my thoughts into words well.
I’ll definitely be passing along this extremely whack view, as well as the fact that Db0 can’t even be assed to respond properly to… anything whatsoever. But that figures, I suppose. When someone doesn’t have an argument, well… someone doesn’t have an argument. And all they can do is spout their ineffectuality as “tl;dr” (as they’ve done further down). Looks great.
Then again, we are in a world where fiction books are getting republished with their “icky bad words” edited out. Really scares me that that’s what we’re coming to. Anyway, getting off topic. Just wanted to say I appreciate anyone defending creative freedom. We don’t need things to get any more Orwellian than they actually are.
This is an absolute, complete, and utter lie based in pure legal fiction (at least in the US, but any respectable libre project should be based on the standards of the most free countries, and any questionable requests can always be redirected to US workers). (Plus, if you’re going based on non-US laws, then what about “hate speech”? A lot of non-US countries ban that too. If it’s just about general global liability, then where’s your filter for that?)
Under the PROTECT ACT of 2003, only fictional child porn that is, and I quote the law directly here, “virtually indistinguishable from that of a minor engaging in sexually explicit conduct” (with “minor” defined as necessarily an actually real human being in law here) is illegal (as banning anything else was struck down by the Supreme Court in Ashcroft v. Free Speech Coalition in 2002).
Nothing beneath that standard carries any legal risk at all, as the absolute proliferation of lolicon content all over the Internet shows. (Otherwise NovelAI would have been raided already.) And the standard itself is obviously pretty damn high. (Otherwise Pixiv would have been raided already.)
That is, even most “photorealistic” images are still not illegal because they are simply rendered in a detailed, photorealistic-esque art style that is still clearly not actually “virtually indistinguishable” from reality. Anything depicting extensive enough unreality/fantasy themes is also of course ineligible (since of course it physically/scientifically can’t be real and thus can’t be “virtually indistinguishable” from reality), even if it looks like what a photograph from such a magical world would hypothetically look like.
Thus, since the OP admits that his little harebrained filter is targeting anime-styled images at all, then obviously he must be banning the generation of images that are 100% legal and therefore actually enforcing only his own brand of preferred censorship (because that’s absolutely what it is). It is simply impossible based on the filter’s description as stated that it is censoring the generation of only illegal images or even anything close to that standard (like even anything within a 30-40% margin of error). In fact, if we applied this filter as stated to the Internet as a whole, it would almost certainly censor more legal images than illegal ones. So don’t pretend it’s about liability because that is obviously unmitigated garbage, and I am not that dumb.
And even considering the above, there is zero legal precedent (that is, it doesn’t fall under the purview of the text of the applicable laws at all, not even just that it’s an untested legal theory, same as it being legal to eat a peanut butter and jelly sandwich isn’t an untested legal theory just because a court has never said so explicitly), even in countries that enforce stricter standards on fictional content like Canada and the UK, for it being illegal to let someone use your processing power to run an algorithm to generate an image that you never even see and is only held in wholly temporary RAM for as long as it takes to send it off (which should be all it ever is unless workers are spying on users and saving outputs, in which case any liability they incur is a just punishment of them for invading people’s privacy). If I host remote VMs and you use Photoshop in one to create an incredibly realistic CP fake the old fashioned way, is there any chance I’d ever get prosecuted over that? Obviously not. (And this is of course a more expansive scenario than just letting someone run SD on your computer.)
Obviously only the creator themselves would be liable, the same as will almost certainly become the default legal standard for AI image generation unless censorious idiots like OP help move the tone of the general consensus in favor of liability for the provider (which you don’t want even if you support this particular filter, because they will immediately also apply that to copyright too and kill all non-fully censored services like SH dead, which I will reiterate below).
“Unlike Facebook, Reddit does not block all porn.” See how dumb that sounds?
Again, by implementing this filter, OP is doing a good job of promoting/potentially leading lawmakers to a standard where AI service providers (including free ones like SH) are held legally responsible for censoring outputs (since, after all, if even the decentralized libre project thinks it’s a good idea to have 100% mandatory, inviolable censorship, then who could reasonably object?), which would then be immediately applied to other areas like intellectual property, imperiling the entirety of his own service unless he wants to lock it down as much as ClosedAI does theirs. Good job, idiots.
Of course, this is exactly what you deserve. If you want to make enforced mandatory censorship your responsibility, then be prepared for the system to worm its way in and hold you additionally accountable for the censorship they want as well, because they are the OG mandatory censorship mafia and if you move into their territory then why shouldn’t they get their cut? (Think Twitter thought they’d ever be pressured into giving government officials special communication lines to allow them to directly target tweets for deletion when they first abandoned their “everything that’s legal” and “free speech wing of the free speech party” policies? Obviously not. But that’s how this particular slippery slope always works. Only a fool thinks he can summon a demon and not deal with the devil.)
Quit hiding your moralistic, censorious restrictions behind bad and incorrect legal knowledge. Don’t blame lawmakers for laws they didn’t even make but you’re enforcing. The US Constitution, in all of its flawed but often wise glory, actually provides pretty damn expansive freedom of expression protections which even the Supreme Court has been loath to chasten. It is you who are restricting them, not the government.
(Again, this is of course a US-centric perspective, but as previously stated quite obviously any libre project should hold itself to the highest standards of the country with the best freedom of expression protections legally available (same as any respectable libre projects go based on US encryption laws, not countries like Australia with ridiculous anti-crypto laws). If British workers, etc. legally need to reject requests under a stricter standard and redirect them to American ones then so be it as that’s understandable, but again what you’re saying is blatantly and inarguably incorrect under US law. And, also to reiterate, if you’re factoring in non-US law, then where’s the hate speech filter?)
To further repeat myself (since you people are apparently brick walls intellectually), seriously you buzzword-spewing NPCs, it is not fucking “CSAM”. Something that inherently and universally does not feature any real children cannot, by definition, be “child sexual abuse material”, as there is no possibility for it to depict or involve child sexual abuse (the most important part of the name), which necessarily requires abuse that, also by definition, can’t be done to an inanimate or fictional entity (at least in the moral sense that warrants banning it). If I tell SD to generate a picture of someone getting punched in the face, is that “violent assault material”? Obviously not.
Get your brains straight. It is nothing more than somewhat realistic (in some cases) but nevertheless entirely fictional erotica depicting (as opposed to featuring) children/minors. Quit spewing newspeak and think about what you’re actually saying before spreading bullshit and misinformation, or do the world a favor and quit posting on the Internet.
methinks you protest too much
Why not use something like https://erasing.baulab.info/ ?
Purely for practical reasons, when will the moment come when the resources spent on censoring images exceed the resources spent on generating them? Or has that moment already arrived? If regular expressions waste only some CPU time, then multiple processing by image-to-text algorithms already seems like a heavy load.
Aren’t you wasting all the resources of your devices on useless work?
The processing needed for CLIP is only a small fraction of the cost of inference.
The legal standing of these images is not unknown. Images which appear to be CSAM, generated by generative or other AI, are still CSAM by definition and are illegal.
CSAM is defined as “any representation, by whatever means, of a child engaged in real or simulated explicit sexual activities or any representation of the sexual parts of a child for primarily sexual purposes” (Child Sexual Abuse Material: Model Legislation and Review, 9th Edition, 2018).
The jurisdictions which have not yet made the required technical changes to their legislation will. Many already made their changes on this issue in 2018.
To the gallery: you’re not being smart. Your actions are not lawful.
I do have a worker on the Horde, and I’m considering retiring it for good over this.
My fetish is MILFs, as I’m an older guy, so you can imagine which kind of prompts I use. I’ve never once used “girl”, “young”, “petite” or anything like that in the prompts. More like “mature”, “milf”, “cougar” and such.
And yet, I’ve had one generation trigger this filter. Granted, it was one out of dozens, but it was still jarring.
There are now logs of that prompt and the image associated with a “CSAM” tag and my username somewhere. Not a nice thing to think about.