Bringing People Together to Inform Decision-Making on Generative AI

By Nick Clegg, President, Global Affairs

Takeaways

  • With so much hyperbole surrounding the debate on AI, it’s important for tech companies to be open about the work they’re doing and find new ways to bring people into the process.
  • That’s why we are launching a Community Forum on Generative AI aimed at producing feedback on the principles people want to see reflected in new AI technologies.
  • The Community Forum will be held in consultation with Stanford’s Deliberative Democracy Lab and the Behavioural Insights Team, and is consistent with our open, collaborative approach to sharing AI models.

Few things have captured the public imagination more in recent months than the significant breakthroughs we’ve seen in artificial intelligence, and generative AI in particular. These are the products and services that allow people to generate text, audio and visual content with simple prompts. The explosion of interest in these powerful new technologies has led to both huge excitement about the possibilities they create and significant concerns about the risks.

For most people, it’s impossible to know what to think when confronted by new technologies that inspire such hyperbolic optimism and pessimism. The only solution is openness from those developing these technologies. That means tech companies need to be more transparent about the technologies they are building; work in collaboration with others in industry, government, civil society and academia to develop them responsibly; and find ways to bring people into the process as they develop the guardrails within which AI systems operate.

An open approach not only means society will have a greater understanding of what these technologies are capable of, and what they’re not; it also gives people the opportunity to help shape their development and to ensure that a diverse range of viewpoints and experiences is baked into the decisions companies make as they build them.

We think it’s important that our product and policy decisions around generative AI are informed by people and experts from around the world. We’ve talked to dozens of academics, researchers, and community leaders about how to think about generative AI content, and we’re also actively working with the Partnership on AI, which we joined in 2016 as a founding member, to help ensure that we and others set the right ethical boundaries as we build these products.

To bring more people into this process, we need to innovate with new approaches to governance and decision-making. That’s why we are announcing the launch of a Community Forum designed to generate feedback on the governing principles people want to see reflected in new AI technologies. Our Community Forum on Generative AI will be held in consultation with Stanford’s Deliberative Democracy Lab and the Behavioural Insights Team (BIT). Both organizations partnered with us on the launch of our Community Forum pilots last year.

What Are Community Forums?

Community Forums bring people together to discuss tough issues, consider hard choices and share recommendations for improving people’s experiences across our apps. The model is based on deliberative polling, an approach that has been used by governments around the world, in which representative groups have the opportunity to learn about complex issues before sharing their perspectives. This differs substantially from more typical user experience surveys, in which people are polled for their impressions but aren’t necessarily familiar with the subject matter.

Participants have access to extensive educational materials, deliberate multiple times in small groups, and have the opportunity to ask questions of leading independent experts about the concepts discussed. This helps them engage more deeply with complex issues like generative AI, and leads to more considered, nuanced and rich results. It also helps Meta take on board a diverse range of perspectives from our users as we build our systems and products.

We believe generative AI is well suited to this method of decision-making. AI models are guided by the data they have access to, as well as the structures and inputs we create in building their infrastructure. Meta and companies like ours input “values” that guide an AI model and can help protect against bias and unintended consequences by giving the model a way to evaluate its own outputs. We think it’s important that these values reflect different viewpoints from throughout society, and the Community Forum will build on the feedback we’ve received from experts as part of our policy development processes.

Meta didn’t invent deliberative democracy mechanisms. They’ve been used by governments and organizations around the world for decades to successfully answer difficult questions: from amending the constitution in Ireland, to addressing environmental disasters and population pressures in Uganda, to changing the election system in parts of Canada.

As Meta refines the best ways to develop generative AI systems, this Community Forum will give us insights into how people would like models to behave on nuanced topics, and so inform future product and policy considerations. Participants will explore the principles that a diverse range of users from around the world believe generative AI systems should align with. These principles could then help inform our systems. Our hope is that the forum will not only help inform our own modeling infrastructure but also provide thoughtful, nuanced insights for the broader industry.

Today, Stanford is also publishing the output of our Metaverse Community Forum, which took place in December. That forum not only gave us valuable insights; it also demonstrated people’s willingness to engage with nuanced and complex issues, and confirmed that this methodology can be successful globally.

An Open Approach to Developing New Technologies

Community Forums are part of a wider ethos of transparency, accountability and collaboration. We will publicly release the findings so other technology companies, academics, and society can benefit from the democratic insights gleaned from participants. Generally speaking, we believe that as powerful new technologies like generative AI are developed, companies need to be more open about the way their systems work. For example, we’re a founding member of the Partnership on AI, and we announced last week that we are participating in its Framework for Collective Action on Synthetic Media, an important step in establishing responsible guardrails around AI-generated content. It’s also why we’re pioneering new models of governance. As well as Community Forums, we established the Oversight Board in 2020, an independent, expert-led body that operates as a check and balance on Meta’s content decisions.

We also believe an open approach to research and innovation – especially when it comes to transformative AI technologies – is better than leaving the know-how in the hands of a small number of big tech companies. That’s why we’ve released over 1,000 AI models, libraries and data sets for researchers over the last decade so they can benefit from our computing power and pursue research openly and safely. Openness is also crucial to ensuring developers, entrepreneurs and startups can use foundation models built by big companies to create new tools and to enhance the safety of AI through continuous and open innovation.
