
Leading the Way in Governance Innovation With Community Forums on AI

By Jennifer Broxmeyer, Governance Director

Takeaways

  • We partnered with Stanford’s Deliberative Democracy Lab and the Behavioral Insights Team on a Community Forum to discuss the principles that can be reflected in generative AI chatbots – and today, we’re releasing the results.
  • We believe that openness in technology like AI leads to more innovation and better, safer products, which is why we’re publicly sharing the results of these Forums so more companies, researchers and governments can benefit from what participants shared.
  • We will run a similar Community Forum across several additional countries to continue generating feedback on the governing principles people want to see reflected in AI.

Meta believes that an open approach to technology can lead to more innovation, more accountability and better outcomes for everyone. That’s why we have pioneered innovative governance solutions, beginning in 2020 with the Oversight Board, which provides independent accountability for our content decisions, and expanding in 2022 to Community Forums, including our recent Community Forum on Generative AI.

These Community Forums bring together representative groups of people from all over the world to discuss and debate with each other and with experts in the field, and then share their perspectives. We started with a pilot on climate misinformation before running our first full Community Forum, on bullying and harassment in the metaverse, which involved over 6,000 participants across 32 countries and was, we believe, the single largest deliberative exercise ever conducted. In the last year, other platforms have been exploring deliberative mechanisms for AI governance, which is a positive development toward standardizing a more open approach in the industry.

Today, Stanford’s Deliberative Democracy Lab, the Behavioral Insights Team (BIT) and Meta are releasing the findings from the recent Community Forum on Generative AI, which included over 1,500 people in Brazil, Germany, Spain and the United States. The Forum was designed to solicit public feedback to complement the input we receive from experts, academics and other stakeholders through our policy development processes. It covered two overarching questions about the types of principles that should be reflected in generative AI chatbots, including how they should interact with people and provide guidance and advice. We’re publicly sharing the results of these Forums so more companies, researchers and governments can benefit from what participants shared.

Key Findings from the Community Forum

The Forum found that people are open to generative AI chatbots, but want to ensure developers are building with transparency and control features in mind. The full report shares more details about participants’ views on each proposal – here are some of the key takeaways:

  • A majority of participants from each country thought AI has had a positive impact, and this view increased through deliberations.
  • A majority believe AI chatbots should be able to use past conversations to improve responses, as long as people are informed. Agreement with this proposal rose as a result of the deliberations.
  • After deliberating, a majority of participants believed that AI chatbots can be human-like as long as people are informed.
  • Participants’ perspectives changed as a result of the deliberation, by as much as 20.5% on some questions, and often toward consensus on the Forum’s most pressing topics. This shows that deliberating with one another and engaging with experts helped participants better understand the proposals and tradeoffs.

We design Community Forums to be principles-based and able to inform our long-term approaches, particularly given the fluid and fast-moving nature of technology. Our teams have been reviewing these findings to help plan future work.

Looking ahead, we will run a similar Community Forum on generative AI in other regions of the world to gather additional feedback from people on the principles that should guide approaches to AI from companies like ours, governments and society. We’re also considering additional methods to democratize inputs into our product development. We plan to share the results of these efforts in the Transparency Center.
