Governments Work Towards AI Regulation

The race to find the optimal way to regulate AI is just beginning.
The recent meeting between U.S. senators and tech leaders like Elon Musk and Mark Zuckerberg marked a pivotal moment in the ongoing discussion about AI regulation. Led by Senate Majority Leader Chuck Schumer, the forum aimed to bridge the gap between policymakers and industry experts in order to craft future AI laws. Participants acknowledged the necessity of a government role in AI's development, including ethical and safety considerations, making the forum a foundational step toward bipartisan AI legislation.

What's Next?

Despite this positive move, however, it remains unclear what the tangible next steps will be. Tech leaders appear unanimous that AI presents serious risks demanding focused attention; the critical question is how that general consensus will translate into actionable plans that balance innovation and safety.

EU Takes a Different Route with AI: A Focus on Startups and Governance

While the United States grapples with how to regulate artificial intelligence, the European Union is forging its own path. The EU recently announced an initiative to grant AI startups faster access to high-performance computing (HPC) resources. Unveiled by European Commission President Ursula von der Leyen during her annual 'State of the Union' address, the move could cut AI model training times from months to weeks or even days.

The Plan and the Caveats

Under this initiative, the EU plans to expand access to its eight supercomputers, currently located in member states such as Finland, Spain, and Italy, with more powerful machines planned for Germany and France. While the program is a boon for small and medium-sized AI startups, there is a catch: participating companies must adhere to the EU's forthcoming framework on AI governance.

Governance and Regulation

The EU is concurrently working on formal regulations for AI through a risk-based framework known as the AI Act. The Act aims to set a global standard for AI governance, considering not just the economic aspects but also the ethical implications and potential existential risks posed by AI. Businesses and researchers will be encouraged to voluntarily commit to the principles of the AI Act even before it becomes official legislation.

Stakeholder Involvement

In a move that aims to be inclusive and holistic, an AI Alliance Assembly will convene in November. The goal is to involve all stakeholders in AI governance, extending the conversation beyond just big tech companies. This multi-stakeholder approach aims to build a robust, widely accepted framework for AI development and governance.

Iterations Are Key

As AI technology advances, the need for governance becomes increasingly urgent. While the U.S. focuses on broad discussions between tech giants and policymakers, the EU is taking a somewhat different approach, concentrating on startups and governance models. Much as a reinforcement learning agent needs many iterations to converge on an optimal policy, governments will need multiple iterations of policy to get regulation right. It is crucial that they iterate rapidly and effectively, settling on sound regulatory frameworks before AI's problems outpace them.
Private organizations like SpaceX, led by Elon Musk, have shown that rapid iteration and minimal bureaucratic red tape can produce innovative solutions at a pace that often outstrips traditional government organizations like NASA.
Hopefully, with entrepreneurs and technical leaders involved, governments can learn from such models and keep pace with the fast-moving landscape of AI.

