FLUX.1 Dev: Sampler + Scheduler Comparison


Testing the outputs of FLUX.1 Dev using the same prompt, fixed seed, steps and denoise values across all combinations of samplers and schedulers. That's 27 samplers and 7 schedulers for 189 images total.

***WORKFLOW IN ATTACHMENTS***

Here's a screenshot of my global settings.

The prompt was "cinematic 18k film still, a young woman wearing glowing butterfly wings. a futuristic blue neon sign centered over her face spells Paris. She is standing in a foggy, dimly lit French palace wearing a vintage gold sequin dress. Glowing light bloom."

Here's how I connected each sampler to all the schedulers. I kept steps at 25 and denoise at 1.0.

Then I just duplicated this into large blocks with some preview grids for quick comparisons.
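For anyone wanting to reproduce the grid programmatically (e.g., via ComfyUI's API) rather than duplicating node blocks by hand, here's a minimal sketch of the enumeration. The sampler/scheduler lists below are a small illustrative subset, not the full 27 and 7 used in the article, and the seed value is an arbitrary placeholder:

```python
from itertools import product

# Illustrative subset of ComfyUI sampler/scheduler names;
# the full run covered 27 samplers x 7 schedulers = 189 pairs.
samplers = ["euler", "dpmpp_2m", "dpm_fast", "uni_pc", "lcm"]
schedulers = ["normal", "karras", "sgm_uniform", "simple"]

FIXED_SEED = 1234  # placeholder; any fixed value works for comparison
STEPS = 25
DENOISE = 1.0

# One job dict per sampler/scheduler pair, all other settings held constant
jobs = [
    {"sampler": s, "scheduler": sch, "seed": FIXED_SEED,
     "steps": STEPS, "denoise": DENOISE}
    for s, sch in product(samplers, schedulers)
]

print(len(jobs))  # 5 samplers x 4 schedulers = 20 jobs here
```

The key point is that only the sampler/scheduler pair varies between jobs, so any visual difference between outputs is attributable to that pair.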

Here's an example of a positive outcome (green with blue check):

And a negative outcome (red):

And here's an example of an image that would probably come together with higher steps or other fixes, but I considered it a failure anyway because it's pretty similar to other pairs that were clearly better quality (red).


That said, here are two pairs that had some artifacts but produced a very different result from the rest of what I was getting: dpm_fast with sgm_uniform and simple (light green with ~check):

So there's some subjectivity to the images I considered rejects. I'd encourage you to see what results you get with your own prompts and settings!


Comments

My hero! Exactly what I needed!

My friend, thank you so much for taking the time to do all this. You just saved me some research I was thinking of doing, and I found it here instead. Keep the good work coming!

Really appreciate you doing the hard work on this. Much appreciated.

Did you happen to take any notes on speed of generation?

Am I the only one not able to get a good result with lcm sampler (regardless of scheduler)? I get "under plastic wrap" looking pictures. The rest of the table looks correct though, thanks!

try setting FluxGuidance between 2.0 and 3.0 to get rid of plastic wrap
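If you'd rather sweep that range than eyeball a single value, a quick sketch of the candidate guidance values (the 0.25 step is an arbitrary choice, not a tested recommendation):

```python
# Candidate FluxGuidance values to A/B against the lcm "plastic wrap" look.
# 2.0-3.0 is the range suggested above; 0.25 increments are arbitrary.
guidance_values = [round(2.0 + 0.25 * i, 2) for i in range(5)]
print(guidance_values)  # [2.0, 2.25, 2.5, 2.75, 3.0]
```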

this is great thank you!

Great idea and superbly realized :-) Thanks... which is the best combination in terms of sharpness and realism from your point of view? When I look at pictures taken with an early digital camera, they are often extremely noisy. sdxl upscaling based on Flux produces really cool results. Flux brings unprecedented results with appropriate denoising. Can you recommend a sampler/scheduler combination that would be optimal for this img2img application? :-)

Nice post!!

If you added a time-taken for each combination this graph would be godlike

Ahh. If I'd known you'd done this, I wouldn't have done it myself earlier. πŸ˜†

You are a hero

Thank you!

Do you have a workflow example that includes negative prompting and scheduler selection? The workflows I've encountered with the "sample custom advanced" node don't include a negative prompt. I was able to get one with the "Xlabs samplers" custom node (which doesn't include the scheduler selection part for some reason), but I'd like to have both options at the same time.

Thanks for your research and for saving other people time. Atm uni_pc + sgm_uniform is my favorite.

thank you good sir!

Thanks for this great job. I began the same work with Flux Schnell GGUF on my potato laptop, and I gave up.


Which sampler is used by Civitai? Because it feels like the images come out better on Civitai compared to Forge.

I've been wondering the same thing.

I feel the same

Thank you! This is great and saved on my desktop for easy access. That chart is so clear even a dumb comfy noob like me can understand it.

Thank you for providing this. I've had issues and always went back to Euler and Simple to get things working, after thinking I was on track for better results with a new combo.

Thank you a lot for this!

Hey guys, I noticed that when using dpm_adaptive with the normal scheduler (but also with karras), the image is done after 2-4 steps. If I increase the steps with the same seed, I just get the same image. I compared them to Euler + normal and other (working) combinations, but I couldn't see any improvement from using a larger number of steps.

Yeah, I've noticed this in my experience, mostly with hyper models: once you go above the recommended steps, you mostly get the same image, a very similar one, or an overcooked one. I see good results up to 2 extra steps beyond the model creator's recommendation, and with some models I can go as high as 20 steps and it works just fine, but that's rare.