The rapid rise of AI agents—autonomous systems that act on behalf of users—offers enormous potential for productivity and personalisation. They can add particular value in complex, high-stakes use cases. Yet under the newly enacted EU AI Act, high-risk use cases carry stringent requirements for governance, documentation, and transparency. In this session, you’ll learn how to turn these compliance challenges into a strategic advantage by setting clear roles and processes, mapping legal articles to engineering tasks, and leveraging Weights & Biases to unite technical and non-technical teams.
Building on our introduction to agents under the EU AI Act, this session dives into the “who” and “how.” We’ll recap the Act’s timeline and high-risk requirements, then share a detailed blueprint for coordinating compliance across governance officers, ML engineers, product managers, legal, and more. See how MLOps tools like Weights & Biases standardize workflows, automate evidence generation, and enforce validation gates. We’ll finish with a live demo of our hiring-agent pipeline and the dynamic W&B report that doubles as your compliance dossier.
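To make the idea of mapping legal articles to engineering tasks and enforcing validation gates concrete, here is a minimal plain-Python sketch. The article-to-artifact mapping and the file names are illustrative assumptions, not the session's actual checklist; in practice such a gate would check artifacts logged to Weights & Biases rather than a hard-coded set.

```python
# Illustrative mapping of EU AI Act obligations (high-risk systems) to
# evidence artifacts the pipeline must produce. File names are hypothetical.
REQUIRED_EVIDENCE = {
    "Art. 10 data governance": "dataset_card.md",
    "Art. 11 technical documentation": "model_card.md",
    "Art. 12 record-keeping": "run_logs/",
}

def validation_gate(produced_artifacts: set) -> tuple[bool, list]:
    """Pass only if every required evidence artifact exists.

    Returns (passed, missing_obligations) so a CI step can block
    deployment and report exactly which obligations lack evidence.
    """
    missing = [obligation for obligation, artifact in REQUIRED_EVIDENCE.items()
               if artifact not in produced_artifacts]
    return (not missing, missing)
```

A CI job could run this gate after each training run and fail the pipeline with the list of unmet obligations, turning the compliance dossier into an automatically checked artifact rather than a manually assembled document.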
In our final installment, we go under the hood. Learn how to build, test, and monitor a compliant hiring agent using W&B Weave. We’ll demonstrate two complementary risk-mitigation techniques—offline benchmarking before deployment and real-time guardrails in production—and show you how to capture both quantitative metrics and qualitative traces through granular monitoring. See how to tie fine-tuning, model training, system engineering, and application code into a unified, auditable pipeline.
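The two risk-mitigation techniques above can be sketched in a few lines of plain Python. The agent stub, function names, threshold, and banned-term list below are all hypothetical placeholders for illustration; the session itself uses W&B Weave (e.g. tracing decorated functions) rather than this standalone code.

```python
# Hypothetical hiring-agent stub; real agents would call an LLM.
def screen_candidate(cv_text: str) -> dict:
    decision = "advance" if "python" in cv_text.lower() else "reject"
    return {"decision": decision, "rationale": "keyword match on required skills"}

def offline_benchmark(agent, labeled_cases, threshold=0.9):
    """Technique 1: pre-deployment gate.

    Run the agent on a labeled evaluation set and pass only if
    accuracy meets the threshold (0.9 is an assumed value).
    """
    correct = sum(agent(cv)["decision"] == label for cv, label in labeled_cases)
    accuracy = correct / len(labeled_cases)
    return accuracy, accuracy >= threshold

# Assumed list of protected attributes; naive substring matching for brevity.
BANNED_TERMS = ("age", "gender", "nationality")

def guardrail(result: dict) -> dict:
    """Technique 2: real-time production check.

    Divert any decision whose rationale mentions a protected
    attribute to human review instead of returning it.
    """
    if any(term in result["rationale"].lower() for term in BANNED_TERMS):
        return {"decision": "needs_human_review",
                "rationale": "guardrail triggered: protected attribute referenced"}
    return result
```

In a Weave setup, both functions would be traced so that every benchmark score and every guardrail trigger lands in the same auditable record as the quantitative metrics.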
Copyright © Weights & Biases. All rights reserved.