I look them up at lemmyverse.net
I go there about once a week to see if there are new communities I might be interested in. I’m on a self-hosted single-user instance, so my “all” is identical to my “subscribed”, and this is how I populate my feed.
Yeah, reducing CFG can help a lot. It sometimes feels to me that getting a good image is about knowing at what point to let loose …
When I started, I was just copying from online galleries like Civitai or Leonardo.ai, which gave me noticeably better images than what I had come up with myself before. However, it seemed to me that many of those images may also just have copied prompts, without much understanding of what’s really going on in them, so I started to experiment for myself.
What I do right now is build my images “from the ground up”, starting with super basic prompts like “a house on a lake” and working from there. First I add descriptions to get the composition right, then I work in the style I’m looking for (photography, digital artwork, cartoon, 3D render, …). Then I work in enhancers and see what they change. I’ve found that you have to be patient, change only one thing at a time, and always do a couple of images (at least a batch of 8) to see if and how things change.
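For anyone who prefers to script that loop instead of clicking through a UI, here’s a rough sketch of the “fix the seed, change one thing, render a batch of 8” idea using the diffusers library — the model id and prompts are just placeholders, not my actual setup:

```python
# Minimal sketch of the "change one thing at a time" loop, assuming the
# diffusers library and a generic SD 1.5 checkpoint (placeholder model id).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base_prompt = "a house on a lake"                         # start super basic
variant = base_prompt + ", golden hour, digital artwork"  # add ONE change

for label, prompt in [("base", base_prompt), ("variant", variant)]:
    # Fixed seed, so the prompt change is the only difference between batches.
    generator = torch.Generator("cuda").manual_seed(42)
    images = pipe(prompt, num_images_per_prompt=8, generator=generator).images
    for i, img in enumerate(images):
        img.save(f"{label}_{i:02d}.png")
```

Comparing the two batches side by side makes it much easier to tell whether a keyword actually did anything or whether it was just seed noise.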
So I still comb through image galleries for inspiration in prompting, but now I’ll mostly just pick one keyword or enhancer and see what it does to my own images.
It is a long process that requires many iterations, but I find it really enjoyable.
I just figured out that I could drag any of my images, made with A1111, into the UI and it would set up the corresponding workflow automatically. I was under the impression that this would only work for images already created with ComfyUI first. However, this gives great starting points to work with. I will play around with it tonight and see if I can extract upscaling and control-net workflows with it as a starting point from existing images.
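In case it helps anyone: both A1111 and ComfyUI write their settings into the PNG itself, which is presumably what ComfyUI picks up on drag-and-drop. A quick way to peek at that metadata is something like the sketch below (using Pillow; the exact keys can vary depending on your setup):

```python
# Sketch: probe a generated PNG for embedded generation metadata.
# A1111 usually stores its settings under a "parameters" text chunk,
# ComfyUI embeds "prompt"/"workflow" JSON — keys may vary by setup.
from PIL import Image

def inspect_metadata(path: str) -> None:
    info = Image.open(path).info  # PNG text chunks end up in this dict
    for key in ("parameters", "prompt", "workflow"):
        if key in info:
            print(f"--- {key} ---")
            print(info[key])

inspect_metadata("my_a1111_render.png")  # placeholder filename
```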
Do you happen to have a ComfyUI tutorial at hand that you could link, one that goes into some detail? These custom workflows sound intriguing, but I’m not really sure where to start.
Please do. I’m thinking about starting to make LoRAs as well, and the tool looks like it would make the process much easier. Let me know how it goes for you.
The prompt was just an example, and usually my prompts get quite a bit longer than that. But with 1.5 models I manage to get what I want to see eventually. I also find that throwing in qualifiers like “mesmerizing” does do something to the image, although it can be subtle.
However, what I wanted to say here is that in SDXL my prompting seems to go nowhere, and I feel I’m not able to get out the kind of image I have in my head. Staying with the prompt example: in SD 1.5, using a custom model like Deliberate 2.0, I’m able to end up with an image of a hat-wearing cat surrounded by surreal-looking candy pops (however the final prompt for that ends up reading). In SDXL my images “break” (i.e. start looking flat, unrefined, or even bizarre) at some point long before I can steer them towards the result I have in mind. All my usual approaches, like reducing CFG, reordering the prompt, or using a variety of qualifiers, don’t seem to work the way I’m used to.
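Just to illustrate what I mean by “reducing CFG”: when I test this systematically, the loop looks roughly like the sketch below (using the diffusers library; the model id, prompt, and values are only stand-ins for whatever I’m actually running):

```python
# Rough sketch of a CFG sweep on SDXL with diffusers, to see where the
# images start to "break" — model id, prompt and values are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat wearing a hat, surrounded by surreal candy pops"
for cfg in (4.0, 6.0, 8.0, 10.0):
    # Same seed for every run, so only guidance_scale changes.
    generator = torch.Generator("cuda").manual_seed(42)
    image = pipe(prompt, guidance_scale=cfg, generator=generator).images[0]
    image.save(f"sdxl_cfg_{cfg}.png")
```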
And tbh, I think this is to be expected. These are new models, so we need new tools (prompts) to work with them. I just haven’t learned how to do it yet, and I’m asking how others do it :)