The model used was a 50% merge of Protogen x3.4 and Anything-3.0. Believe it or not, the "aroused" and "horny" keywords are there for the facial expression. "Dreamlikeart" was included by mistake when I reused an old prompt from a different model (the previous model I used was a 50% merge of Dreamlike and Anything-v3).
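For anyone wondering what a "50% merge" means mechanically: checkpoint mergers like the one in webui just take a weighted average of the two models' weight tensors, key by key. A minimal sketch of the idea, with plain Python floats standing in for real torch tensors (the toy dicts and names here are made up for illustration):

```python
def merge_checkpoints(state_a, state_b, alpha=0.5):
    """Weighted-sum merge: result = (1 - alpha) * A + alpha * B.

    alpha=0.5 is the '50% merge' case: each model contributes equally.
    Real checkpoints map keys to torch tensors; floats stand in here.
    """
    merged = {}
    for key in state_a:
        if key in state_b:
            merged[key] = (1 - alpha) * state_a[key] + alpha * state_b[key]
        else:
            # Keys missing from one model get copied over as-is.
            merged[key] = state_a[key]
    return merged

# Toy "state dicts" standing in for the two source models.
model_a = {"layer.weight": 2.0, "layer.bias": 0.0}
model_b = {"layer.weight": 4.0, "layer.bias": 1.0}
mix = merge_checkpoints(model_a, model_b, alpha=0.5)
print(mix)  # {'layer.weight': 3.0, 'layer.bias': 0.5}
```

(The "add difference" merge mode mentioned further down this thread is a different formula, A + alpha * (B - C), but the state-dict-walking shape is the same.)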
Just to be clear: no, it's not Kagome from Inuyasha, and if you watched the show you'd know. Stable Diffusion is not great at re-creating established characters (which indeed sucks).
@antlers_anon are you sure there's no hypernetwork you've selected in your settings tab, or TI that's being used to generate this image?
I tried to generate this with your provided mix__2.ckpt with all of the exact same settings (prompt, sampler, CFG scale, seed) and it still results in a fairly realistic non-anime style.
If nothing else, would you be willing to share your embeddings folder and hypernetworks folder? (just a screenshot would probably do as well?)
edit: looks like the firstpass width/height was set to 0x0, so it was just straight-up going for 768x1280 for initial resolution. I ended up finding that it gets pretty close, especially if I use kl-f8-anime2.vae instead of novelai/anythingv3 vae.
I'm guessing that colour correction and some filtering adds a little noise
I was gonna make a meme comment, but I feel like being helpful. :u
Being a Builder gives you access to a few additional features regular users don't have, like using the mode menu when viewing posts (for quick faving/unfaving and for tag scripts) and being able to give other people feedback, to name two. And since the Gold and Platinum levels are sort of used, I guess (but not really?), Builder is kind of the default level people get promoted to if they're active in uploading and tagging stuff.
Oh I know that now. But this pic is actually an accurate representation of how I felt when the message came in. I was like, "Oh cool! I'm a builder! ....... what's a builder?"
I'd love to know what hypernet was used with this!
I'm almost entirely sure I didn't use anything more than the mentioned model. Did you try using it with the prompt? I can try generating it again to see if I didn't mess up while copying the settings. I'll report back once I'm near my pc.
@SomeCoolUsername Sorry to disappoint, but I don't really have them. I use this to generate, and none of the metadata gets saved automatically. I only fill out what I can know for sure.
The only setting I had access to but didn't add to the metadata field is the Guidance Scale, since I'm not sure whether it's the same thing as CFG Scale. I tend to go with either 8.5 or 9; I think this one was a 9.
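For what it's worth, "guidance scale" and "CFG scale" are generally the same knob under different names: the classifier-free guidance weight. Each denoising step makes two noise predictions, one unconditional and one conditioned on the prompt, and extrapolates between them. A minimal sketch with scalars standing in for the predicted noise tensors (the function name is mine):

```python
def cfg_combine(eps_uncond, eps_cond, scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional result, toward the prompt-conditioned one.
    scale=1 just returns the conditional prediction; values like
    8.5-9 lean much harder on the prompt."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

# With scale 1 we get the conditional prediction back unchanged:
print(cfg_combine(1.0, 3.0, 1.0))  # 3.0
# With scale 9 we extrapolate well past it:
print(cfg_combine(1.0, 3.0, 9.0))  # 19.0
```

This is why very high scales tend to produce oversaturated, "fried" images: the extrapolation overshoots.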
Here's a site with a bunch of models to choose from: https://rentry.org/sdmodels It really depends on what you're going for and in what style. I haven't experimented with many models myself, but I know that gape NovelAI is better for lewds, and Anything is just great overall but doesn't do amazingly with lewds. Hopefully that helps~
Hey this is actually a problem I've been having. Thanks for mentioning it. I just learned I need to put it in the VAE folder to make it show up on the list.
btw, have you tried using --no-half-vae? It helped me get rid of black pictures when generating with NovelAI, Anything, etc.
I wouldn't recommend using the Anything VAE since it would cause some images to be black. Most of the time they would be fine but once every, say, 50-60 images I would get a completely black square.
Switching to vae-ft-ema-560000-ema-pruned.ckpt fixed the issue for me.
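If anyone's wondering why a half-precision VAE produces black squares at all: float16 can only hold finite values up to 65504, so when a decode activation occasionally spikes past that it overflows to inf/NaN and the whole decoded image collapses to black. --no-half-vae keeps the VAE in float32 to dodge this. A quick stdlib-only illustration of the range limit (the helper name is mine; real overflow happens inside the VAE's conv layers, not in struct):

```python
import struct

FP16_MAX = 65504.0  # largest finite IEEE-754 half-precision value

def fits_in_fp16(x: float) -> bool:
    """True if x can be stored as a *finite* float16."""
    try:
        struct.pack("e", x)  # "e" = half-precision float format code
        return True
    except OverflowError:
        return False

print(fits_in_fp16(65504.0))  # the fp16 ceiling itself still fits
print(fits_in_fp16(70000.0))  # an activation this size overflows
```

float32, by contrast, is finite up to about 3.4e38, which is why the fp32 VAE (or a VAE finetuned to keep activations small, like the ft-ema/ft-mse ones) doesn't hit the problem.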
Interesting. I made the exact same model with an identical hash, but with absolutely identical settings and prompt I get nothing even close to the art in the post.
Also, there seems to be some problem with the colors
I can't get the result you're getting. Even though the hashes match, the models might still differ because of the weird way hashes are calculated: in my experience the hash stays the same no matter what weight you use with the add difference method. So either of us might have made a mistake there (wouldn't be the first time I messed up writing instructions for a mix). You can download the model I'm using from https://mega.nz/folder/XMUzWIAL#i52o1QYOx7j1neujUJzfWw as mix__6.
I'm also using the latent upscaler for the highres fix. You probably won't get the exact same image because of my --xformers but you should get close.
As for the colors, I think I used the vae-ft-mse-840000 one. As redjoe said, it should fix your color problems.
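On the hash point: if I remember right, the old-style webui "model hash" never looked at the whole file. It hashed a single 64 KB window starting at byte offset 0x100000 and kept only the first 8 hex digits, so two different checkpoints can easily share a hash if they agree in that one window. A reconstruction from memory (not copied from the webui source), with fake "checkpoints" to show the collision:

```python
import hashlib
import tempfile

def short_model_hash(path):
    """Old webui-style hash: sha256 of the 64 KB at offset 1 MiB,
    truncated to the first 8 hex characters."""
    with open(path, "rb") as f:
        f.seek(0x100000)
        return hashlib.sha256(f.read(0x10000)).hexdigest()[:8]

# Two fake "checkpoints" that differ only OUTSIDE the hashed window.
blob = bytes(0x200000)  # 2 MiB of zeros
with tempfile.NamedTemporaryFile(delete=False) as a:
    a.write(blob)
with tempfile.NamedTemporaryFile(delete=False) as b:
    b.write(b"\xff" + blob[1:])  # change only the very first byte

print(short_model_hash(a.name) == short_model_hash(b.name))  # True
```

Which is why identical hashes in old webui versions were never proof of identical models; the newer full-file sha256 hashes don't have this problem.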
Recently I've seen some models mention a "needed" clip skip setting and I wanted to know more about it. Happened to come across this example, thanks!
Glad I could help. If you're going to be working with clip skip a lot, I recommend adding it to your main interface: go into WebUI settings | User interface | Quicksettings list and add CLIP_stop_at_last_layers (put a comma between each entry there). Refresh and it'll be up top next to your model selector.
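As for what the setting actually does: CLIP_stop_at_last_layers = 2 ("clip skip 2") makes the text encoder stop one transformer layer before the end and hand that earlier hidden state to the diffusion model; NAI-derived models like Anything were trained that way, which is why they "need" it. A toy sketch of the idea (the function and the stand-in "layers" are made up; real CLIP layers are transformer blocks, twelve of them in SD 1.x):

```python
def encode_with_clip_skip(tokens, layers, clip_skip=1):
    """Run the text-encoder layer stack, stopping (clip_skip - 1)
    layers before the end. clip_skip=1 uses the full stack;
    clip_skip=2 returns the second-to-last layer's hidden state."""
    hidden = tokens
    for layer in layers[: len(layers) - (clip_skip - 1)]:
        hidden = layer(hidden)
    return hidden

# Twelve toy "layers" that each just add 1, standing in for CLIP's blocks.
layers = [lambda h: h + 1 for _ in range(12)]
print(encode_with_clip_skip(0, layers, clip_skip=1))  # 12: all layers ran
print(encode_with_clip_skip(0, layers, clip_skip=2))  # 11: stopped one early
```

Using the wrong value doesn't error out; it just feeds the UNet an embedding from a layer it wasn't trained against, which usually shows up as subtly (or not so subtly) off results.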
Looks great! Would you be willing to share the embedding? How many images did you use to train it? I want to train a Yoimiya embedding, and I selected around 70 images. I wonder if that's enough.
This is a problem known as "bruising" (the little purple spots here and there). To fix it, go to WebUI settings | Stable Diffusion tab | SD VAE and set it to anything-v3.0.vae or nai.vae (I'm pretty sure these are identical). I have no idea if this will make your image identical to AA's, but it will fix the bruising and desaturation.