
"I suspect the model's capabilities stem primarily from its multi-faceted training approach. Each image in the dataset was paired with both original captions (sourced from alt text and human descriptions) and synthetic captions generated using Gemini models with varied prompts."

This is exactly the approach OpenAI followed when it first released DALL-E 3, which at the time was the only model that could reliably spell short words/phrases and was top-tier in prompt adherence and instruction following. I broke down that research paper in November 2023: https://www.whytryai.com/p/dall-e-3-better-captions-research-paper-summary
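For anyone curious what "paired with both original and synthetic captions" looks like in practice: the DALL-E 3 paper reported training on a mix heavily weighted toward synthetic descriptive captions (around 95%). Here's a minimal sketch of that per-image caption sampling; the function name and exact ratio default are illustrative, not from either model's actual codebase:

```python
import random

def pick_caption(original, synthetic_options, p_synthetic=0.95, rng=random):
    """Pick one training caption for an image.

    With probability p_synthetic, sample one of the model-generated
    descriptive captions; otherwise fall back to the original
    (alt-text / human-written) caption. The 95% default follows the
    ratio reported in the DALL-E 3 paper.
    """
    if synthetic_options and rng.random() < p_synthetic:
        return rng.choice(synthetic_options)
    return original
```

Keeping a small slice of original captions in the mix is what lets the model still handle the short, messy prompts real users type, instead of only the long descriptive style the captioner produces.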

It's a testament to how quickly things change that DALL-E 3 now feels obsolete, outshone by so many other models (FLUX, Recraft, Ideogram, Imagen, etc.) when it comes to prompt following.

And I agree: aesthetics seem like a largely solved problem. Accurate and consistent instruction following is the next battleground.
