Webinar

Navigating the EU AI Act: Compliance through governance and observability

Event Overview
The European Union’s AI Act establishes a comprehensive framework to ensure the safe and ethical deployment of AI across industries. However, its requirements for strong governance, documentation, and transparency can be difficult to navigate. Discover how global organizations are turning regulatory compliance into an advantage, driving innovation while maintaining global AI safety and ethics standards. Join our three-part webinar series for strategies and recommendations to help AI teams meet the rigorous standards required by the EU AI Act.
Part 1: Agents under the EU AI Act: balancing opportunity, risk, and compliance
June 24, 12:00 BST

The rapid rise of AI agents—autonomous systems that act on behalf of users—offers enormous potential for productivity and personalization, and they can add particular value in complex, high-risk use cases. Yet under the newly enacted EU AI Act, such high-risk use cases are subject to stringent requirements for governance, documentation, and transparency. In this session, you’ll learn how to turn these compliance challenges into a strategic advantage by setting clear roles and processes, mapping legal articles to engineering tasks, and leveraging Weights & Biases to unite technical and non-technical teams.

What to expect:

  • A two-fold look at AI agents: the big opportunities—and the high-risk compliance hurdles—under the EU AI Act
  • High-level walkthrough of our hiring-agent case study (fine-tuning + agentic workflows) aligned with the Act’s high-risk articles
  • Practical frameworks for defining team roles, governance handoffs, and rapid auditability

What you will learn:

  • How Articles 5–15 of the EU AI Act apply to agentic systems and map to concrete engineering and reporting tasks
  • Strategies for structuring fast, compliant development without slowing innovation
  • How Weights & Biases centralizes model provenance, documentation, and reporting for both engineers and compliance officers

Who should attend:

  • AI developers and ML engineers working on agents
  • Engineering managers and technical leaders tasked with establishing compliant processes and reporting
  • Governance and business managers tasked with implementing strategies and processes for compliance
Part 2: Roles, processes, and platforms to build a compliant hiring agent
July 8, 12:00 BST

Building on our introduction to agents under the EU AI Act, this session dives into the “who” and “how.” We’ll recap the Act’s timeline and high-risk requirements, then share a detailed blueprint for coordinating compliance across governance officers, ML engineers, product managers, legal, and more. See how MLOps tools like Weights & Biases standardize workflows, automate evidence generation, and enforce validation gates. We’ll finish with a live demo of our hiring-agent pipeline and the dynamic W&B report that doubles as your compliance dossier.

What to expect:

  • Quick refresher on EU AI Act milestones and the engineering/governance challenges in our hiring-agent prototype
  • Concrete role definitions, handoffs, and risk-mitigation strategies for cross-functional teams
  • Exploration of W&B Models and Weave to automate reporting, enforce standards, and reduce manual documentation
  • Live demo: end-to-end hiring-agent pipeline and dynamic compliance report in Weights & Biases

What you will learn:

  • How to translate EU AI Act requirements into specific tasks for each stakeholder group
  • Common risks in agent-based hiring (e.g., bias, hallucination) and the technical mitigation techniques we applied
  • Configuration of W&B workflows and reports to centralize provenance and lock in approval gates
  • How to adapt our demo repo as a template for your own high-risk AI use cases

Who should attend:

  • Compliance & Governance Officers
  • Engineering Managers
  • AI/ML Engineers
  • Product & Business Managers
Part 3: Engineering a compliant hiring agent: A technical deep dive
July 15, 12:00 BST

In our final installment, we go under the hood. Learn how to build, test, and monitor a compliant hiring agent using W&B Weave. We’ll demonstrate two complementary risk-mitigation techniques—offline benchmarking before deployment and real-time guardrails in production—and show you how to capture both quantitative metrics and qualitative traces through granular monitoring. See how to tie fine-tuning, model training, system engineering, and application code into a unified, auditable pipeline.
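
For a taste of the trace capture discussed in this session, here is a minimal sketch of logging an agent step with W&B Weave so that each decision leaves an auditable record. It is illustrative only: the project name, the screen_candidate function, and its return fields are hypothetical placeholders rather than the pipeline shown in the webinar.

    import weave

    # Start a Weave project; the project name here is a placeholder.
    weave.init("hiring-agent-compliance-demo")

    # Decorating a function with @weave.op records its inputs, outputs,
    # and call tree, producing a reviewable trace for each invocation.
    @weave.op()
    def screen_candidate(cv_text: str, job_description: str) -> dict:
        # Placeholder logic; a real agent would call an LLM or retrieval
        # step here and return a structured, explainable decision.
        return {
            "decision": "advance",
            "rationale": "Candidate meets the stated minimum qualifications.",
        }

    # Each call is traced and can be inspected later for audit purposes.
    screen_candidate("Example CV text...", "Example job description...")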

What to expect:

  • A deep-dive, 30-minute walkthrough of our agentic pipeline in Weave, building on Parts 1 & 2
  • Two-layer risk mitigation: offline benchmarks to stress-test decisions and online guardrails to enforce policies at runtime
  • Demonstration of trace collection in W&B: dashboards for performance and bias, plus conversational logs for qualitative audits
  • Integration patterns for unifying fine-tuning (W&B Models), Weave workflows, and application components

What you will learn:

  • How to design and run offline benchmarks that validate agent behavior against compliance criteria
  • Configuration of W&B Weave guardrails and validation hooks to catch violations in real time
  • Best practices for capturing, visualizing, and triaging both quantitative metrics and qualitative traces
  • How to architect a seamless, end-to-end pipeline that delivers a living compliance audit trail

Who should attend:

  • AI/ML Engineers
  • MLOps Engineers
  • Engineering Managers
  • Governance Officers

Featured speakers

Nicolas Remerscheid
AI Solutions Engineer
Weights & Biases

Alexander Machado
Head of Trustworthy AI CoE
AppliedAI
Co-hosted with AppliedAI