OpenAI's Democratic Inputs to AI Grant Program Updates
OpenAI shares progress on their Democratic Inputs to AI Grant Program
In a significant update to its Democratic Inputs to AI grant program, OpenAI has shared what it learned and how it plans to build on the work. Launched in May 2023, the initiative aimed to integrate public input into AI development, acknowledging the importance of aligning AI with human values as the technology becomes increasingly prevalent.
The Objective
Out of nearly 1,000 applicants, 10 teams were each awarded a $100,000 grant to develop and test ideas that use democratic methods to govern AI systems. These teams faced challenges such as engaging a diverse group of participants, ensuring their outputs represented varied opinions, and maintaining transparency to earn public trust.
The selected teams, hailing from 12 different countries, brought varied expertise to the table, including law, journalism, and machine learning. Their projects involved innovative democratic technologies like video deliberation interfaces, crowdsourced AI model audits, and methods to map beliefs for model fine-tuning. AI played a crucial role in these processes, aiding in communication, transcription, and data analysis.
Incorporating Public Input
OpenAI has committed to building on this momentum by creating a comprehensive process for incorporating external inputs into AI model training and behavior. The company plans to integrate the research and prototypes developed by the grant recipients. Additionally, OpenAI shared the code from the program and summarized each team's contributions, highlighting the program's collaborative and transparent approach.
This initiative marks a significant step in democratizing AI development, ensuring that diverse public opinions shape the future of AI technology. OpenAI invites researchers and engineers to join them in continuing this innovative work.
Some of the Teams
Case Law for AI Policy
The "Case Law for AI Policy" project, led by Quan Ze (Jim) Chen and his team, aimed to develop a robust case repository for AI interactions. This repository is designed to inform AI decisions in a manner similar to case-law judgments, incorporating insights from experts, the general public, and key stakeholders. The process involved experts brainstorming around policy questions to identify key dimensions, followed by the public providing their stances on these scenarios. Subsequently, stakeholders set precedents for specific domains. The final stage involved training an AI model on this case repository, enabling it to make informed decisions on new cases.
Collective Dialogues for Democratic Policy Development
Under the guidance of Andrew Konya and colleagues, the "Collective Dialogues for Democratic Policy Development" project focused on creating AI policies that reflect the informed will of the public. This was achieved through collective dialogues, which helped scale democratic deliberation and find areas of consensus. The process began with AI-supported dialogues to understand public views on policy issues, followed by the creation of initial policy drafts. These drafts were then refined through expert input and further public refinement, culminating in a final evaluation of public support for the proposed policies.
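One way to picture the consensus-finding step is a bridging filter: a draft policy clause survives only if it clears a support threshold within every participant group, not merely in aggregate. The sketch below illustrates that assumption with invented votes, groups, and threshold; it is not the team's actual method.

```python
# Hypothetical sketch: keep only policy clauses with cross-group support.
from collections import defaultdict

# Invented votes: (participant_group, clause_id, supports?)
votes = [
    ("group_a", "clause_1", True), ("group_a", "clause_1", True),
    ("group_b", "clause_1", True), ("group_b", "clause_1", True),
    ("group_b", "clause_1", False),
    ("group_a", "clause_2", True), ("group_b", "clause_2", False),
]

def bridging_clauses(votes, threshold=0.5):
    """Return clause ids supported by > threshold of voters in every group."""
    tally = defaultdict(lambda: [0, 0])  # (group, clause) -> [yes, total]
    for group, clause, supports in votes:
        tally[(group, clause)][1] += 1
        tally[(group, clause)][0] += int(supports)
    clauses = {clause for _, clause in tally}
    groups = {group for group, _ in tally}
    return [
        c for c in sorted(clauses)
        if all(
            tally[(g, c)][1] > 0 and tally[(g, c)][0] / tally[(g, c)][1] > threshold
            for g in groups
        )
    ]

print(bridging_clauses(votes))  # -> ['clause_1'] with the sample data
```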
Democratic Fine-Tuning
The "Democratic Fine-Tuning" initiative, led by Joe Edelman and his team, was dedicated to eliciting values from participants in chat dialogues to create a moral graph. This graph would then be used to fine-tune AI models. The team's approach involved selecting contentious questions, engaging participants in dialogues to generate 'values cards' summarizing important considerations, and then voting on these cards. The process also included generating stories of values transitions to identify 'values upgrades', ultimately leading to the creation of a moral graph that guides the fine-tuning of AI models.
Energize AI: Aligned - a Platform for Alignment
Led by Ethan Shaotran, Ido Pesok, and Sam Jones, "Energize AI: Aligned" is a platform for aligning AI models with democratic inputs and governance. The team develops a set of guidelines through live, large-scale community participation: members propose and assess new guidelines for AI behavior, which are then tested for practicality in steering models. A 'community notes' algorithm identifies guidelines that receive support from diverse viewpoints, ensuring democratic approval. The output is a continuously updated, transparent constitution of AI guidelines, backed by public support metrics, and the final stage refines these guidelines with broader community input to improve the diversity and alignment of the models.
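The post doesn't spell out the algorithm, but the name points at the publicly documented approach behind X's Community Notes: fit each rating as mu + b_user + b_guideline + f_user * f_guideline, so the latent factor f absorbs viewpoint-driven agreement and a guideline's intercept b_guideline stays high only when support cuts across viewpoints. The sketch below is a simplified one-factor illustration with invented data, not the team's implementation.

```python
# Hypothetical sketch of a community-notes-style bridging score over
# {0, 1} ratings of proposed guidelines, fit by stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_guidelines = 40, 5
# Invented ratings matrix with missing entries marked NaN.
R = rng.choice([0.0, 1.0, np.nan], size=(n_users, n_guidelines), p=[0.3, 0.4, 0.3])

mu = 0.0
b_u = np.zeros(n_users)          # user intercepts (rating generosity)
b_g = np.zeros(n_guidelines)     # guideline intercepts (bridging score)
f_u = rng.normal(0, 0.1, n_users)       # user viewpoint factor
f_g = rng.normal(0, 0.1, n_guidelines)  # guideline viewpoint factor

observed = np.argwhere(~np.isnan(R))
lr, reg = 0.05, 0.03
for _ in range(300):
    for u, g in observed:
        err = R[u, g] - (mu + b_u[u] + b_g[g] + f_u[u] * f_g[g])
        mu += lr * err
        b_u[u] += lr * (err - reg * b_u[u])
        b_g[g] += lr * (err - reg * b_g[g])
        f_u[u], f_g[g] = (
            f_u[u] + lr * (err * f_g[g] - reg * f_u[u]),
            f_g[g] + lr * (err * f_u[u] - reg * f_g[g]),
        )

# Guidelines ranked by intercept: a high b_g means support that is not
# explained by the viewpoint factor, i.e. diverse, cross-viewpoint approval.
print(np.argsort(-b_g))
```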