depth2img now available on the Stable Horde!

Through the great work of @ResidentChiefNZ, the Stable Horde now supports depth2img, a new method of doing img2img which better understands the source image you provide, and the results are downright amazing. I think this article explains it better than I could.

See below the transformation of one of my avatars into a clown, a zombie and an orangutan, respectively.

To use depth2img, you need to explicitly use the Stable Diffusion 2 Depth model. The rest works the same as img2img.
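
As a rough illustration, here is a minimal Python sketch of what such a request might look like against the usual /api/v2/generate/async endpoint. The field names (models, source_image, source_processing), the parameter values and the anonymous API key are assumptions based on the regular img2img payload, so check the API docs or your client before relying on them.

```python
# Minimal sketch of a depth2img request to the Stable Horde API.
# Field names mirror the regular img2img payload and are assumptions here.
import base64
import requests

API_URL = "https://stablehorde.net/api/v2/generate/async"

# Encode the source image as base64, as for a normal img2img request.
with open("avatar.webp", "rb") as f:
    source_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "a portrait of a clown",
    "params": {
        "denoising_strength": 0.75,
        "width": 512,
        "height": 512,
    },
    # The only depth2img-specific part: explicitly request the depth model.
    "models": ["Stable Diffusion 2 Depth"],
    "source_image": source_image,
    "source_processing": "img2img",
    # Note: no source_mask -- depth2img ignores masks entirely.
}

# "0000000000" is assumed to be the anonymous key; use your own for priority.
response = requests.post(API_URL, json=payload, headers={"apikey": "0000000000"})
print(response.json())
```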

Warning: depth2img does not support a mask! If your client allows you to send one, it will simply be ignored.

If you are running a Worker, you can simply update your bridge code, but you must also run update-runtime, as this update pulls in quite a few new packages. Afterwards, add the model to your list as usual.

We also recently enabled diffusers models to be loaded into voodoo ray, which allows you to keep not only depth2img in RAM alongside other models, but also the older inpainting model! Please direct all your kudos to @cogentdev for this! I am already running inpainting, depth2img, SD 2.1 and 15 other 1.5 models on my 2070 with no issues!

If you have built your own integration with the Stable Horde, such as a client or bot, please update your tools to take depth2img into account. I would suggest adding a new tab for it which forces the Stable Diffusion 2 Depth model and prevents sending an image mask, to avoid confusion (see the sketch below). This will also give you the opportunity to provide some more information about the differences between img2img and depth2img.
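
For client authors, here is a hedged sketch of the kind of guard I have in mind; the helper function and the payload shape are hypothetical, not part of any official SDK.

```python
# Hypothetical helper for a client-side "depth2img" tab: force the depth model
# and drop any mask before the payload is sent to the horde.
DEPTH_MODEL = "Stable Diffusion 2 Depth"

def prepare_depth2img_payload(payload: dict) -> dict:
    """Return a copy of an img2img-style payload adjusted for depth2img."""
    adjusted = dict(payload)
    # depth2img only works with the depth model, so override whatever was set.
    adjusted["models"] = [DEPTH_MODEL]
    # Masks are ignored by depth2img; removing it avoids user confusion.
    adjusted.pop("source_mask", None)
    # Otherwise the request is processed like a normal img2img request.
    adjusted["source_processing"] = "img2img"
    return adjusted
```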

Enjoy, and please shower the people behind these new updates with kudos where you see them!