DALL·E 2
DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language.
![](https://cdn.openai.com/dall-e-2/assets/heroes/8.jpg)
![](https://cdn.openai.com/dall-e-2/assets/heroes/23.jpg)
![](https://cdn.openai.com/dall-e-2/assets/heroes/39.jpg)
![](https://cdn.openai.com/dall-e-2/assets/heroes/42.jpg)
![](https://cdn.openai.com/dall-e-2/assets/heroes/86.jpg)
![](https://cdn.openai.com/dall-e-2/assets/heroes/112.jpg)
DALL·E 2 can create original, realistic images and art from a text description. It can combine concepts, attributes, and styles.
DALL·E 2 can make realistic edits to existing images from a natural language caption. It can add and remove elements while taking shadows, reflections, and textures into account.
DALL·E 2 can take an image and create different variations of it inspired by the original.
DALL·E 2 has learned the relationship between images and the text used to describe them. It uses a process called “diffusion,” which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.
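To make the “random dots to image” description concrete, the following is a minimal sketch of the reverse sampling loop used by DDPM-style diffusion models, the family this passage describes. Everything here (the `denoise_model` placeholder, the noise schedule, the step count) is an illustrative assumption, not DALL·E 2’s actual implementation.

```python
import numpy as np

T = 1000                               # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoise_model(x, t):
    """Placeholder for a trained network that predicts the noise present in x at step t."""
    return np.zeros_like(x)            # a real model would return a learned prediction

def sample(shape=(64, 64, 3)):
    x = np.random.randn(*shape)        # start from pure noise: the "pattern of random dots"
    for t in reversed(range(T)):
        eps = denoise_model(x, t)      # predicted noise component at this step
        # DDPM posterior mean: remove a small amount of the predicted noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                      # re-inject a little fresh noise, except at the final step
            x = x + np.sqrt(betas[t]) * np.random.randn(*shape)
    return x                           # after T steps the noise has been shaped into an image
```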
In January 2021, OpenAI introduced DALL·E. One year later, our newest system, DALL·E 2, generates more realistic and accurate images with 4x greater resolution.
Example prompt used to compare the two models: “fox sitting in a field at sunrise in the style of Claude Monet”
When evaluators were asked to compare 1,000 image generations from each model, DALL·E 2 was preferred over DALL·E 1 for both caption matching and photorealism.
DALL·E 2 is a research project that is not currently available in our API. As part of our effort to develop and deploy AI responsibly, we are studying DALL·E 2’s limitations and capabilities with a select group of users. Safety mitigations we have already developed include:
- We’ve limited DALL·E 2’s ability to generate violent, hateful, or adult images. By removing the most explicit content from the training data, we minimized DALL·E 2’s exposure to these concepts. We also used advanced techniques to prevent photorealistic generations of real individuals’ faces, including those of public figures.
- Our content policy does not allow users to generate violent, adult, or political content, among other categories. We won’t generate images if our filters identify text prompts or image uploads that may violate our policies, and we have automated and human monitoring systems to guard against misuse (a toy sketch of such a pre-generation check follows this list).
- We’ve been working with external experts and are previewing DALL·E 2 with a limited number of trusted users who will help us learn about the technology’s capabilities and limitations. We plan to invite more people to preview this research over time as we learn and iteratively improve our safety system.
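As an illustration of the pre-generation check mentioned in the second item above, here is a hypothetical prompt filter that combines a hard blocklist with a pluggable classifier hook. OpenAI has not published its production filters, so every name and term below is a placeholder.

```python
# Hypothetical prompt filter: hard blocklist plus a classifier stand-in.
# Not OpenAI's actual moderation system.
BLOCKED_TERMS = {"placeholder-violent-term", "placeholder-adult-term"}

def classifier_flags(prompt: str) -> bool:
    """Stand-in for a trained text classifier that scores policy risk."""
    return False  # placeholder: no real model is wired in

def is_allowed(prompt: str) -> bool:
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False                     # hard match against the blocklist
    return not classifier_flags(prompt)  # otherwise defer to the classifier

print(is_allowed("a fox sitting in a field at sunrise"))  # -> True
```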
Our hope is that DALL·E 2 will empower people to express themselves creatively. DALL·E 2 also helps us understand how advanced AI systems see and understand our world, which is critical to our mission of creating AI that benefits humanity.