Search Results
Creative Corner » I've Been away for about a year... » Post 2
Oh my fluffy goodness… I haven’t used YiffyMix (thru Noobai now I guess) since before I hopped onto the Pony Diffusion boat to experiment with SDXL models… I immediately see the appeal! >w< I actually really loved YiffyMix early on… to see that it has grown so strong is incredible… i’ma have to tinker more to see if it’s worth shifting my loyalty from the Nova Illustrious models… >w> and I’ll have to look up your ZoinksNoob later…
I really tend to use Google’s Nano Banana for inspiration and some prompt writing… sometimes i’ll have it pre-generate an sfw image to bring into img2img… I do wish some of these commercial platforms like Dall E and Nano Banana would loosen up a bit and let us enjoy the nsfw lol
But seriously, thanks for the call-out on the model and the welcome back! I would have been slow to notice it otherwise! I will see what I can add to the community… at least for today… I am trying to work on one of my greatest weaknesses… wholesome innocent safe-for-work art… gulps
Creative Corner » I've Been away for about a year... » Post 1
Hi, welcome back. I don’t think a lot has changed. I think most people are using Noob-based models now (I’m using ZoinksNoob). Don’t bother with PonyV7, it’s been dead on arrival. Google’s Nano Banana Pro and Nano Banana 2 give a ton of possibilities, but trying to use them for free can be problematic.
Personally, I would love to see something unique and creative. Forge UI does support Regional Prompter; I’m not sure why you seemingly can’t run it. You might try using Neo Forge or ReForge instead of the original.
Creative Corner » I've Been away for about a year... » Topic Opener
What have I missed? :D I have been playing around with Nova MLP XL V2 but I haven’t been able to touch any other Models outside the Nova Illustrious Family Lately… Been giving Pony Diffusion Models a break and can’t seem to get Pony V7 to work with the service I use…
Also I’m just generally curious what people wanna see! I know most of the people here can and will likely just create what they want with the power of the almighty AI lol but I would still love to cater to the community however I can! Though I am still pretty limited to mostly couple shippings and pinups since Forge UI can’t run Regional Prompter and that breaks my fluffy heart!~<3 Anyway… i’m back whether you knew or cared about my existence before or not! Nyahahahah! >:D
Creative Corner » Diary of a prompter » Post 9
Chapter 9 added. This time about size issues :P
Creative Corner » [Tutorial] Replicating a generation from Stable Diffusion WebUI metadata » Post 3
There are already many videos on YouTube about how to set up Automatic1111 WebUI and its forks. All of them are better than anything I could do.
Creative Corner » [Tutorial] Replicating a generation from Stable Diffusion WebUI metadata » Post 2
I use Fooocus. It doesn’t produce as “clean” a result as Stable Diffusion, but Stable Diffusion is very difficult to understand
Creative Corner » [Tutorial] Replicating a generation from Stable Diffusion WebUI metadata » Post 1
It would be great if you could record a couple of tutorials on Stable Diffusion and its configuration
Creative Corner » [Tutorial] Replicating a generation from Stable Diffusion WebUI metadata » Topic Opener
I got a comment on one of my images recently asking what AI I use, which I assume means what model. I figure this is as good a time as any to explain how you can regenerate most of the images I post here from what’s in the description.
Let’s use >>59493 as an example:
simple background, evil, (looking at you), smirk, unicorn, hat, outfit, flawless, sparklemoonilll <lora:Flawless_Sparklemoon_tamers12345:1>, countershading, high quality, detailed, solo, portrait, close up
Negative prompt: lips
Steps: 20, Sampler: Euler a, Schedule type: Automatic, CFG scale: 4, Seed: 4016617097, Size: 1080x1080, Model hash: 9d73bac23a, Clip skip: 2, ADetailer model: face_yolov8n.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer version: 25.3.0, Lora hashes: "Flawless_Sparklemoon_tamers12345: 1032bc97f5da", Downcast alphas_cumprod: True, Version: v1.10.1
This is the same output Stable Diffusion WebUI can be configured to write to a txt file alongside each image. The first two lines are the positive and negative prompts and should be self-explanatory. Below that is everything you need to know to regenerate the image.
When using Stable Diffusion WebUI yourself, you can paste the entire output above into the prompt field verbatim, then press the ↙️ button to automatically set up almost everything. If there’s a setting WebUI doesn’t know how to set up, it will appear in the “Override Settings” section.
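If you’re not using WebUI, the metadata block can also be parsed by hand. Here’s a rough Python sketch of a parser (my own approximation, not the code WebUI actually uses); it assumes the settings all sit on the final line, and that quoted values may contain commas and colons:

```python
import re

def parse_infotext(text: str) -> dict:
    """Rough parser for A1111-style generation metadata."""
    lines = text.strip().split("\n")
    # The last line holds the comma-separated settings.
    settings_line = lines[-1]
    negative = ""
    prompt_lines = []
    for line in lines[:-1]:
        if line.startswith("Negative prompt:"):
            negative = line[len("Negative prompt:"):].strip()
        else:
            prompt_lines.append(line)
    # Split "Key: value" pairs; quoted values keep their commas/colons.
    pairs = re.findall(r'([\w ]+):\s*("[^"]*"|[^,]*)', settings_line)
    settings = {k.strip(): v.strip('" ') for k, v in pairs}
    return {"prompt": "\n".join(prompt_lines).strip(),
            "negative": negative,
            "settings": settings}
```

With the example above, `settings["Model hash"]` would come back as `9d73bac23a` and `settings["Sampler"]` as `Euler a`, ready to feed into whatever frontend you use.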
The only thing the ↙️ button can’t set up for you is the loaded checkpoint, which you might not even have yet, so take note of the model and Lora hashes, in this case:
Model hash: 9d73bac23a,
Lora hashes: "Flawless_Sparklemoon_tamers12345: 1032bc97f5da",
Hashes are much better than model names or links for finding specific models. While links and tags can be convenient in the short term, they are also subject to changes and errors over time. Hashes, on the other hand, are a mathematical guarantee that you have the identical model that was used in the original generation.
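If you already have a file and want to check it against a hash from the metadata, you can compute the hash yourself. As far as I can tell, recent A1111 builds report “Model hash” as the first 10 hex digits of the file’s SHA-256; that’s an assumption on my part, and note that older versions used a different scheme and Lora hashes are computed differently again:

```python
import hashlib

def short_model_hash(path: str, length: int = 10) -> str:
    """SHA-256 of a checkpoint file, truncated to WebUI's short form.

    Assumption: "Model hash" is the first 10 hex digits of the file's
    SHA-256 (appears true for recent A1111 builds, not older ones).
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so multi-GB checkpoints fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()[:length]
```

If the result matches the `Model hash:` value in the metadata, you have byte-for-byte the same file.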
Since a lot of models used here come from CivitAI, that would be a good place to start searching. CivitAI allows you to search for models by their hashes: simply query the hash itself in the search bar.
If you can’t find them on CivitAI, you could try other sites like Hugging Face, or even a generic web search for the hash. If you can’t find it anywhere, it means one of the following:
- The model was private and never uploaded to the Internet
- The model once existed but has been nuked from the Internet
- The model was corrupted or modified by the generator in some way but was still able to generate an image
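For the CivitAI route, there is also a public API endpoint that looks a model version up by file hash, which saves clicking through the search UI. A minimal sketch, assuming the `by-hash` endpoint behaves as documented (I believe it accepts the short hash as well as the full SHA-256, but treat that as an assumption):

```python
import json
import urllib.request

BY_HASH = "https://civitai.com/api/v1/model-versions/by-hash/"

def civitai_url(file_hash: str) -> str:
    # Build the lookup URL for CivitAI's by-hash endpoint.
    return BY_HASH + file_hash

def find_model(file_hash: str) -> dict:
    """Fetch CivitAI's metadata for the model version matching this
    file hash (raises urllib.error.HTTPError if the hash is unknown)."""
    with urllib.request.urlopen(civitai_url(file_hash)) as resp:
        return json.load(resp)
```

For example, `find_model("9d73bac23a")` should return the model’s name and download link if CivitAI knows the hash; a 404 means you’re in one of the three situations above.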
One last thing to know: even if you have everything needed to perfectly replicate the image, differences between AI stacks (PyTorch version, CUDA vs ROCm, etc.) might still lead to different results, though they should be similar enough to be recognisable as the same image. And obviously, manual editing or inpainting done after the image was generated can’t be replicated. For example, >>59493 has been uploaded with a transparent background.
Creative Corner » Tips for creating ponified and anthro Equestria Girls » Post 1
Try “COLOR body” or “COLOR fur” instead of “COLOR coat”. The training data almost entirely associates “coat” with the article of clothing. Also, color prompting with lots of adjectives and obscure color names like “rose colored mane with lighter rose and grayish aquamarine streaks” almost never works properly.
Creative Corner » Tips for creating ponified and anthro Equestria Girls » Topic Opener
I was wondering if anyone had any prompt tips for getting consistent ponified and anthropized Equestria Girls characters in Prefect Pony or other generators without having to resort to LoRAs. My focus at the moment is the Shadow Five from Friendship Games, particularly Indigo Zap and Sour Sweet. A problem I’m facing is that when trying to prompt for coat color, whether as ponies or anthros, it often generates images of them literally wearing coats. Another issue (with Sour Sweet) is that the mane color isn’t assigned properly; for example, the prompt “rose colored mane with lighter rose and grayish aquamarine streaks” translates to a completely grayish aquamarine mane and a rose cutie mark.
Any suggestions?
Creative Corner » need help finding AI Generator » Post 5
@Scarlet Ribbon
Really! That’s good/great to know and remember. I’ve been working on trying to get an image (poses, positions, etc.) under control, and oversaturation has been becoming a huge problem for me.
(If curious, it’s the Illustrious model Xavier.)
I’ve ended up trying so many things in Forge Neo trying to get it under control that I forget about stuff like that.
An example being that some models, like Xavier, don’t need/like a VAE used with them, and it causes your image to be filled with JPEG artifacts all over the place.
Creative Corner » need help finding AI Generator » Post 4
@derp621
‘High contrast mess’ usually means you have the wrong Clip Skip setting.
Creative Corner » need help finding AI Generator » Post 3
- 20 to 30 steps
- 4 to 5 cfg
- euler (though most of the samplers i’ve tried work)
Creative Corner » need help finding AI Generator » Post 2
I’ve been trying to get this model working in my setup but it seems all I get is a high contrast mess. What are you guys using? I’m using Automatic1111 with ROCm.
Creative Corner » Diary of a prompter » Post 7
This is a great writeup. And it occurred to me that I never messed with the lighting too much.
Creative Corner » Diary of a prompter » Post 6
I thought this got abandoned, it’s great to see some more updates.
Creative Corner » Diary of a prompter » Post 5
Added chapter 8. And thanks for your feedback so far. If even one person found this helpful, it was worth the time.
Creative Corner » Post Your AI Art! » Post 40
A recent one that I was pleased with how it came out.
Creative Corner » Show your AI video gens here! (SFW and NSFW are welcome!) » Post 4
Background Pony #B4E3
Creative Corner » Show your AI video gens here! (SFW and NSFW are welcome!) » Post 3
Background Pony #B4E3
More.
Creative Corner » need help finding AI Generator » Post 1
it’s going by WahtasticMerge now
Creative Corner » need help finding AI Generator » Topic Opener
I’ve been seeing a lot of Pandomerge Generated images lately and I want to try out that Generator but I can’t find it
Creative Corner » Diary of a prompter » Post 4
Thank you for sharing this. It’s very useful!
Creative Corner » Diary of a prompter » Post 3
Thanks! It’s great to see what I’ve been doing wrong and make some improvements!
Showing results 26 - 50 of 340 total
Default search
If you do not specify a field to search over, the search engine will search for posts with a body that is similar to the query's word stems. For example, posts containing the words winged humanization, wings, and spread wings would all be found by a search for wing, but sewing would not be.
Allowed fields
| Field Selector | Type | Description | Example |
|---|---|---|---|
| author | Literal | Matches the author of this post. Anonymous authors will never match this term. | author:Joey |
| body | Full Text | Matches the body of this post. This is the default field. | body:test |
| created_at | Date/Time Range | Matches the creation time of this post. | created_at:2015 |
| id | Numeric Range | Matches the numeric surrogate key for this post. | id:1000000 |
| my | Meta | my:posts matches posts you have posted if you are signed in. | my:posts |
| subject | Full Text | Matches the title of the topic. | subject:time wasting thread |
| topic_id | Literal | Matches the numeric surrogate key for the topic this post belongs to. | topic_id:7000 |
| topic_position | Numeric Range | Matches the offset from the beginning of the topic of this post. Positions begin at 0. | topic_position:0 |
| updated_at | Date/Time Range | Matches the creation or last edit time of this post. | updated_at.gte:2 weeks ago |
| user_id | Literal | Matches posts with the specified user_id. Anonymous users will never match this term. | user_id:211190 |
| forum | Literal | Matches the short name for the forum this post belongs to. | forum:meta |
