Description

ZoinksNoob is pretty decent at show-accurate style, but there were still plenty of issues to fix. Getting two characters to kiss was as problematic as ever. Still, I think it turned out well.
Created using the ZoinksNoob model available on HuggingFace
If you want to support me:
BuyMeACoffee


Comments


Hmm, I see. I'll have to try Forge out since I've never seen it. I've been using ComfyUI through Inference because it felt the most straightforward/friendly. I tried Stable Diffusion WebUI too, but that was giving me horrible results in general for some reason.
Thanks for the answer!
tyto4tme4l

Something of an artist
@Montaraz13
Yes, it matters a lot. I use Forge WebUI (also via Stability Matrix), I know inpainting is more tricky in ComfyUI. I use options similar to the ones in the screenshot below. Important things to note:
  1. Masked content - what should be used under the mask; I always leave it at "original" to use the original image
  2. Inpaint area - "Whole picture" for major elements that depend on their surroundings (limbs, patterns, etc.), "Only masked" for small, detailed, independent elements like eyes, cutie marks, jewelry, etc.
  3. Denoising strength - how much you want to change the original masked content; I usually set it to values in the range of 0.2-0.4, depending on the situation
  4. Sampling method, resolution, CFG Scale - leave them the same as in the original image
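As a side note, the "Only masked" option from the list above can be sketched roughly like this (a toy Python illustration based on how A1111/Forge describe the feature; the function name and padding value are my own assumptions): the UI crops to the mask's bounding box plus some padding, inpaints that crop at full model resolution, then pastes it back.

```python
def masked_crop_box(mask, padding, width, height):
    """Bounding box of the masked pixels, grown by `padding`, clamped to the image."""
    xs = [x for x, y in mask]
    ys = [y for x, y in mask]
    x0 = max(min(xs) - padding, 0)
    y0 = max(min(ys) - padding, 0)
    x1 = min(max(xs) + padding, width - 1)
    y1 = min(max(ys) + padding, height - 1)
    return x0, y0, x1, y1

# Mask given as painted pixel coordinates on a 512x512 image.
mask = [(50, 60), (55, 62), (52, 70)]
print(masked_crop_box(mask, padding=32, width=512, height=512))
# (18, 28, 87, 102)
```

The extra padding is why "Only masked" still blends reasonably with its surroundings: the model sees a margin of untouched context around the selection.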
Yeah, inpainting is the big one I used before in NovelAI. I had mixed results with local AI though. Often it will just fill the selected zone with blurred colors or not connect things at all.
There is no extra addon or step besides painting the zone you want remade, right? (Also, I'm doing it on the Inference/Stability Matrix UI for ComfyUI, not sure if that matters)
tyto4tme4l

Something of an artist
@Montaraz13
I use inpainting a lot, quite often combined with crude manual editing in GIMP. At the end of the creation process, I also manually clean up any imperfections such as artifacts and discolored inpainting areas.
Inpainting is very powerful and easy to use; you basically tell the model to fix the selected area, and most of the time it works great.
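A minimal sketch of what "fix the selected area" means mechanically (a toy illustration, not any particular UI's code): the model regenerates the image, and the result is blended back so only the masked pixels actually change.

```python
def composite(original, generated, mask):
    """Per-pixel blend: keep the original where mask is 0, take the newly
    generated pixels where mask is 1 (fractional values give soft edges)."""
    return [o * (1 - m) + g * m for o, g, m in zip(original, generated, mask)]

# Toy 1D "image": only the masked pixels get replaced.
print(composite([10, 20, 30], [90, 80, 70], [0.0, 0.5, 1.0]))
# [10.0, 50.0, 70.0]
```

This is also why a too-soft or misplaced mask can leave blurry or discolored seams that need manual cleanup afterwards.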
That is really nice. I love show accurate when it's pulled off well. Do you fix things yourself in Photoshop and such, or do you use some kind of process with the AI itself? I've heard about several things for fixing errors but have yet to figure them out.