dpo-sdxl

Direct Preference Optimization (DPO) is a method for aligning text-to-image diffusion models with human preferences by optimizing directly on human comparison data.
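
For context, the sketch below shows roughly what a Diffusion-DPO training objective looks like in PyTorch. It is not this tool's actual training code: the function name, tensor shapes, and the `beta` value are illustrative assumptions, and the per-timestep weighting used in the Diffusion-DPO paper is omitted.

```python
import torch
import torch.nn.functional as F

def diffusion_dpo_loss(model_pred_w, model_pred_l,
                       ref_pred_w, ref_pred_l,
                       noise_w, noise_l, beta=5000.0):
    """Minimal Diffusion-DPO loss sketch.

    *_w / *_l are noise predictions for the human-preferred ("winner") and
    rejected ("loser") latents at the same timestep; noise_* is the ground-truth
    noise added to each latent. beta controls how far the fine-tuned model may
    drift from the frozen reference model (value here is an assumption).
    """
    # Per-sample squared-error terms for the trained and reference models.
    model_err_w = (model_pred_w - noise_w).pow(2).mean(dim=[1, 2, 3])
    model_err_l = (model_pred_l - noise_l).pow(2).mean(dim=[1, 2, 3])
    ref_err_w = (ref_pred_w - noise_w).pow(2).mean(dim=[1, 2, 3])
    ref_err_l = (ref_pred_l - noise_l).pow(2).mean(dim=[1, 2, 3])

    # Implicit log-ratio difference between winner and loser samples.
    diff = (model_err_w - ref_err_w) - (model_err_l - ref_err_l)

    # Logistic (Bradley-Terry) preference loss, as in DPO.
    return -F.logsigmoid(-beta * diff).mean()

# Usage with random stand-ins for a batch of SDXL latents (shape is assumed):
preds = [torch.randn(2, 4, 128, 128) for _ in range(6)]
loss = diffusion_dpo_loss(*preds)
```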

Try it now

June 11, 2024
