Skip to main content
The Keyword

AI

3 emerging practices for responsible generative AI



Google’s ongoing work in AI powers tools that billions of people use every day — including Google Search, Translate, Maps and more. Some of the work we’re most excited about involves using AI to solve major societal issues — from forecasting floods and cutting carbon to improving healthcare. We’ve learned that AI has the potential to have a far-reaching impact on the global crises facing everyone, while at the same expanding the benefits of existing innovations to people around the world.

This is why AI must be developed responsibly, in ways that address identifiable concerns like fairness, privacy and safety, with collaboration across the AI ecosystem. And it’s why — in the wake of announcing that we were an “AI-first” company in 2017 — we shared our AI Principles and have since built an extensive AI Principles governance structure and a scalable and repeatable ethics review process. To help others develop AI responsibly, we’ve also developed a growing Responsible AI toolkit.

Each year, we share a detailed report on our processes for risk assessments, ethics reviews and technical improvements in a publicly available annual update — 2019, 2020, 2021, 2022 — supplemented by a brief, midyear look at our own progress that covers what we’re seeing across the industry.

This year, generative AI is receiving more public focus, conversation and collaborative interest than any emerging technology in our lifetime. That’s a good thing. This collaborative spirit can only benefit the goal of AI’s responsible development on the road to unlocking its benefits, from helping small businesses create more compelling ad campaigns to enabling more people to prototype new AI applications, even without writing any code.

For our part, we’ve applied the AI Principles and an ethics review process to our own development of AI in our products — generative AI is no exception. What we’ve found in the past six months is that there are clear ways to promote safer, socially beneficial practices to generative AI concerns like unfair bias and factuality. We proactively integrate ethical considerations early in the design and development process and have significantly expanded our reviews of early-stage AI efforts, with a focus on guidance around generative AI projects.

For our midyear update, we’d like to share three of our best practices based on this guidance and what we’ve done in our pre-launch design, reviews and development of generative AI: design for responsibility, conduct adversarial testing and communicate simple, helpful explanations.

1. Design for responsibility.

It’s important to first identify and document potential harms and start the generative AI product development process with the use of responsible datasets, classifiers and filters to address those harms proactively. From that basis, we also:

  • Participate in workshops alongside the research community to identify comprehensive ways to build trustworthy AI. Recently, we’ve supported and helped advance forums like Ethical Considerations in Creative Applications of Computer Vision and Cross-Cultural Considerations in NLP.
  • Develop a prohibited use policy in advance of release, based on harms identified early in the research, development and ethics review process.
  • Use technical approaches such as classifiers and other tools to flag and filter outputs that violate policies, and additional methods such as those in the Responsible AI toolkit. Recently, we’ve added a new version of the Learning Interpretability Tool (LIT) to the toolkit, for model debugging and understanding, and the Monk Skin Tone Examples (MST-E) dataset to help AI practitioners use the inclusive Monk Skin Tone (MST) scale.
  • Gather a group of external experts across a variety of fields such as law and education for robust discussions on equitable product outcomes. Our ongoing Equitable AI Research Roundtable (EARR), for example, continues to meet with thought leaders who represent communities historically underrepresented in AI leadership positions, focusing on generative AI topics.
  • Offer an experimental, incremental release to trusted testers for feedback.
  • Proactively engage with policymakers, privacy regulators and global subject matter experts on an ongoing basis to inform wider releases, as we did before expanding Bard to 40 languages and international audiences.

2. Conduct adversarial testing.

Developers can stress-test generative AI models internally to identify and mitigate potential risks before launch and any ongoing releases. For example, with Bard, our experiment that lets people collaborate with generative AI, we tested for outputs that could be interpreted as person-like, which can lead to potentially harmful misunderstandings, and then created a safeguard by restricting Bard’s use of “I” statements to limit risk of inappropriate anthropomorphization we discovered during testing. We also:

  • Seek input from communities early in the research and development process to develop an understanding of societal contexts. This can help inform thorough stress testing. For example, we recently partnered with MLCommons and Kaggle to create Adversarial Nibbler, a public AI competition to crowdsource adversarial prompts to stress-test text-to-image models, with the goal of identifying unseen gaps, or “unknown unknowns,” in how image generation models are evaluated.
  • Test internally and inclusively. Before releasing Bard, we pulled from a group of hundreds of Googlers with a wide variety of backgrounds and cultural experiences, who volunteered to intentionally violate our policies to test the service. We continue to conduct these internal adversarial tests to inform Bard’s ongoing expansions and feature releases.
  • Adjust and apply adversarial security testing to address generative AI-specific concerns. For example, we’ve evolved our ongoing “red teaming” efforts — a stress-test approach that identifies vulnerabilities to attacks – to “ethically hack” our AI systems and support our new Secure AI Framework. We’ll expand the ethical hacking approach to generative AI even further, sharing a large language model for public red-teaming at this year’s DEFCON conference.

Related Article

Google's AI Red Team: the ethical hackers making AI safer

Today, we're publishing information on Google’s AI Red Team for the first time.

Read Article

3. Communicate simple, helpful explanations.

At launch, we seek to offer clear communication on when and how generative AI is used. We strive to show how people can offer feedback, and how they’re in control. For example, for Bard, some of our explainability practices included:

  • The ‘Google It’ button provides relevant Search queries to help users validate fact-based questions
  • Thumbs up and down icons as feedback channels
  • Links to report problems and offer operational support to ensure rapid response to user feedback
  • User control for storing or deleting Bard activity

We also strive to be clear with users when they are engaging with a new generative AI technology in the experimental phase. For example, Labs releases such as NotebookLM are labeled prominently with “Experiment,” along with specific details on what limited features are available during the early access period.

Another explainability practice is thorough documentation on how the generative AI service or product works. For Bard, this included a comprehensive overview offering clarity on the cap on the number of interactions to ensure quality, accuracy and prevent potential personification and other details on safety, and a privacy notice to help users understand how Bard handles their data.

Maintaining transparency is also key. We released a detailed technical report on PaLM 2, the model currently powering Bard, which includes information based on our internal documentation of evaluation details, and guidance for AI researchers and developers on the responsible use of the model.

In addition to the three observations above, we’re broadly focused on ensuring that new generative AI technologies have equally innovative guardrails when addressing concerns such as image provenance. Our efforts include watermarking images Google AI tools generate (as in Virtual Try On or Da Vinci Stickies) and offering image markups for publishers to indicate when an image is AI generated.

Being bold and being responsible are not at odds with each other — in fact, they go together in promoting the acceptance, adoption and helpfulness of new technologies. Earlier this month, we kicked off a public discussion inviting web publishers, civil society, academia and AI communities to offer thoughts on approaches to protocols to support the future development of the Internet in the age of generative AI. As we move ahead, we will continue to share how we apply emerging practices for responsible generative AI development and ongoing transparency with our annual, year-end AI Principles Progress Update.

Authors

Marian Croak

VP, Responsible AI and Human-Centered Technology
Read more from Marian

Jen Gennai

Director of Responsible Innovation
Read more from Jen

Let’s stay in touch. Get the latest news from Google in your inbox.

Subscribe