New Improvements To DALL·E 2 Greatly Reduce Bias

DALL·E 2 has received improvements for bias mitigation, along with updates to the safety systems that ensure the model is used responsibly.
Created on July 19 | Last edited on July 19
Drawing on the time DALL·E 2 has spent in limited preview so far, the researchers and engineers behind the model have been improving it based on user feedback and suggestions. They have implemented a new technique in DALL·E 2 that shows impressive results for mitigating bias in generated images, alongside improvements to the safety systems that ensure DALL·E 2 is used responsibly.

Prior to these improvements, prompts such as "A photo of a CEO" or "A portrait of a teacher" would often produce stereotypical representations of the occupation. With the new technique in place, the generated images more frequently depict a much broader range of subjects.

Beyond model changes for improved bias mitigation, the team has also been improving the safety systems that work to ensure users interact with DALL·E 2 in a safe and productive manner. They have been refining the systems that monitor for misuse of DALL·E 2, with more accurate content filters and the rejection of uploaded images depicting specific people, such as political figures.
With the aim of continued refinement and safe use of DALL·E 2, improvements will continue to be made as the project becomes available to more people. The team says that improvements like these give them the confidence to grant access to a wider audience.

Find out more

Tags: ML News