
Google's New AI Regulation Recommendations

Google weighs in on AI regulation, sharing twelve main recommendations that aim to provide a balanced means of governing AI applications.
Numerous prominent tech leaders, including Elon Musk and Steve Wozniak, recently signed a petition advocating for a six-month pause on developing advanced AI technology. This has sparked significant controversy, as it can be argued that many of the petition's supporters have direct conflicts of interest.
In response to the growing discussion around AI regulation, Google has released a document outlining its recommendations for a balanced and effective approach to governing AI applications.

What Are Google's AI Recommendations?

Here are the main recommendations:
  1. Take a sectoral approach that builds on existing regulation
  2. Adopt a proportionate, risk-based framework
  3. Promote an interoperable approach to AI standards and governance
  4. Ensure parity in expectations between non-AI and AI systems
  5. Recognize that transparency is a means to an end
  6. Clarify expectations for conducting risk assessments
  7. Take a pragmatic approach to setting disclosure standards
  8. Workable standards for explainability and reproducibility require compromise
  9. Ex-ante auditing should center on processes
  10. Ensure fairness benchmarks are pragmatic and reflect the wider context
  11. Prioritize robustness but tailor expectations to the context
  12. Be wary of over-reliance on human oversight as a solution to AI issues
The overarching theme of these guidelines is flexibility in designing legislation. If you're interested in the details of these recommendations, we've linked the full document below. We'll go into a few of the recommendations that seemed most interesting.

Adopt a proportionate, risk-based framework

Google notes that a balanced approach is needed, one that weighs potential harms against AI's social and economic benefits and acknowledges that if an AI system outperforms existing methods on critical tasks, it may be irresponsible not to use it. The document asserts that regulatory frameworks should not discourage uses of AI that are net beneficial. Government regulation tends to be rigid, especially when weighing risk against reward, so hopefully this recommendation will be followed.

Ensure parity in expectations between non-AI and AI systems

Somewhat related to the last recommendation, Google argues that the benchmark for AI should be the performance of comparable processes or human-powered alternatives. They note a risk that demanding higher standards for AI than for non-AI approaches could hinder innovation, often because existing decision processes contain hidden flaws or because people are more forgiving of human error. Governments have done well so far in regulating self-driving cars by focusing on safety and performance standards while recognizing that AI can offer significant advantages over human-driven vehicles, so hopefully this trend will continue.

Take a pragmatic approach to setting disclosure standards

They point out the need to balance transparency with the tradeoffs and challenges of providing detailed information about AI models, and they offer three general principles for disclosure:
  1. The deploying organization should be responsible for disclosure and documentation, not third-party suppliers.
  2. AI's role in decision-making or interactions should be easily discoverable, especially when it may not be expected.
  3. Disclosures should be clear and meaningful to a wide audience while also providing additional technical information for expert users and reviewers when appropriate.
This seems like a great suggestion, as it would bring some clarity to why AI systems behave in certain ways and help researchers find important problems to work on.
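To make these principles a bit more concrete, here is a minimal sketch of what a machine-readable disclosure record might look like. The structure, field names, and example values below are our own illustration, not anything Google's document prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    """Hypothetical disclosure record for a deployed AI system."""
    deploying_organization: str  # Principle 1: the deployer, not the supplier, owns disclosure
    ai_role: str                 # Principle 2: make AI's role in decisions easy to discover
    plain_language_summary: str  # Principle 3: meaningful to a wide, non-expert audience
    technical_appendix: dict = field(default_factory=dict)  # Principle 3: extra detail for experts

# Illustrative values only:
disclosure = AIDisclosure(
    deploying_organization="Example Bank",
    ai_role="Ranks loan applications for human review",
    plain_language_summary=(
        "An automated system helps prioritize applications; "
        "a person makes the final decision."
    ),
    technical_appendix={"model_family": "gradient-boosted trees", "last_audit": "2023-Q1"},
)
```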

Player Playing Referee?

The overarching theme of these recommendations is a context-sensitive, risk-based approach to AI regulation that leverages existing frameworks and acknowledges the potential benefits and limitations of AI systems without imposing excessive constraints or unrealistic expectations. While the document provides thoughtful recommendations, it is essential to remember that Google is a major player in the AI industry and may have biases that influence its suggestions.
Their recommendations could contain principles that, while seemingly beneficial for AI regulation as a whole, ultimately serve Google's business interests. Policymakers should therefore critically evaluate these recommendations and weigh them against a wide range of perspectives from other stakeholders to ensure a comprehensive and fair approach to AI regulation.
