OpenAI Releases ChatGPT-Powered DALL-E 3


The Gist

  • DALL-E 3 is here. The latest version of the AI image generator offers seamless integration with ChatGPT. 
  • Enhanced accuracy. DALL-E 3 understands context to create exceptionally accurate images.
  • New restrictions. The platform limits the creation of inappropriate content and adds features to respect creators’ rights and preferences. 

OpenAI recently unveiled DALL-E 3, the third version of its generative AI text-to-image platform. What’s new? DALL-E 3 is built natively on ChatGPT, meaning users can use ChatGPT to help brainstorm, create and refine prompts for images. 

A demonstration of DALL-E 3 shows that when prompted with an idea — whether a sentence or a few words — ChatGPT automatically comes up with four detailed prompts, which DALL-E 3 then visualizes. If you want to make changes to the images, all you have to do is ask ChatGPT to adjust the prompt. 

“Modern text-to-image systems have a tendency to ignore words or descriptions, forcing users to learn prompt engineering. DALL-E 3 represents a leap forward in our ability to generate images that exactly adhere to the text you provide,” said OpenAI.

What’s exciting about the platform is the true integration between text and graphics. Users can generate a visual from a prompt, then ask ChatGPT to write a relevant story or poem based on that visual. Iterations are simple and take seconds. And, for purists, it’s still possible to write prompts without ChatGPT’s help. 


The Evolution of DALL-E 

OpenAI released the original DALL-E, a platform that uses generative artificial intelligence to convert text into images, in January 2021. A year later came DALL-E 2, which OpenAI claimed generated more realistic and accurate images with four times greater resolution. 

Both platforms have accuracy flaws: they don’t always fulfill every element of a prompt — something I discovered when I tested DALL-E 2 against Midjourney and Stability AI’s Stable Diffusion. 

For example, I gave DALL-E the prompt: “Alien planet with two moons and exotic, brightly colored vegetation.” The result? Some of the generated images failed to include bright vegetation or the correct number of moons. 

DALL-E 3, however, understands context better, according to OpenAI. “DALL-E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.” 
