Take your AI agent from prototype to production

OVERVIEW

What is Fully Connected?

Fully Connected is your chance to join top AI engineers for two days of agent building and model training, hosted by Weights & Biases. Experts from across the industry will help you reach the next level in AI development. Day 1 offers hands-on workshops spanning every domain of agent construction. Day 2 features talks from industry pioneers sharing hard-earned insights across verticals and applications. You'll walk away with a whole new toolkit for building AI applications that matter.

On-Demand

Watch the Sessions

Catch up on all the sessions from Fully Connected

Weights & Biases and CoreWeave: Fully Connected 2025 Keynote

Join Weights & Biases Co-Founder Lukas Biewald and CoreWeave’s SVP of Engineering Chen Goldberg as they walk you through W&B’s newest releases for both W&B Models and Weave before showcasing the three features we’ve been working on together.

Building super intelligent tools

W&B Co-Founder Shawn Lewis built a programming agent that topped the SWE-Bench leaderboard for months. In this talk, Shawn walks through his process, what he’s learned building agent experimentation loops, and how we’re bringing those lessons into Weights & Biases.

PyTorch: The open language of AI

PyTorch has become the central place where diverse perspectives come together, with a community collectively building a comprehensive framework that spans all layers of AI development. Joe Spisak, a longtime leader of PyTorch and Llama open source, talks about the next phase of PyTorch, the growing ecosystem around Foundation 2.0, and how PyTorch continues to evolve as the open language of AI.

AI’s $600B question: Scaling for what comes next

Join Sequoia’s David Cahn and Weights & Biases' Lavanya Shukla from Fully Connected 2025 in San Francisco, CA, for a wide-ranging interview about AI investing in 2025, touching on everything from the AI talent wars to where open source fits into an increasingly competitive AI landscape.

From research to reality in the age of (Gen)AI

Xavi Amatriain, VP of Product, AI at Google, walks through how to translate cutting-edge research into scalable, reliable product features. He touches on best practices for robust evaluation beyond traditional metrics, mitigating hallucinations, ensuring responsible AI development, understanding the evolving role of data, and more.

Shipping smart agents: Lessons from the frontlines

Join Alex Laubscher, one of the first deployed engineers at Windsurf, for a behind-the-scenes look from Fully Connected San Francisco 2025 at how the company is revolutionizing enterprise software development.

The open source AI compute tech stack: Kubernetes + Ray + PyTorch + vLLM

AI workloads require scale for both compute and data, and they require unprecedented heterogeneity. Common patterns are beginning to emerge to handle this complexity. Join Robert as he walks you through how Kubernetes, Ray, PyTorch, and vLLM work in tandem.
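
Below is a minimal, illustrative sketch of how two of these pieces can fit together: a vLLM engine (serving a PyTorch model) wrapped in a Ray actor that could be scheduled on a Kubernetes-backed Ray cluster. The model name is a placeholder and the snippet assumes ray, vllm, and a GPU are available; it is not taken from the talk itself.

    import ray
    from vllm import LLM, SamplingParams

    ray.init()  # on Kubernetes this would connect to a cluster launched via KubeRay

    @ray.remote(num_gpus=1)
    class LLMWorker:
        def __init__(self, model_name: str):
            # vLLM loads the PyTorch model weights and manages paged KV-cache memory
            self.llm = LLM(model=model_name)

        def generate(self, prompts):
            params = SamplingParams(temperature=0.2, max_tokens=128)
            outputs = self.llm.generate(prompts, params)
            return [o.outputs[0].text for o in outputs]

    # Placeholder model name; swap in whatever checkpoint your cluster serves
    worker = LLMWorker.remote("meta-llama/Llama-3.1-8B-Instruct")
    print(ray.get(worker.generate.remote(["Explain what Ray adds to this stack."])))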

Building future-ready AI with agents & data flywheels: Insights from NVIDIA’s enterprise deployments

Santiago from NVIDIA shares insights, best practices, and lessons learned from building scalable, enterprise-ready AI agents using data flywheels. He walks through real deployments to show how AI agents orchestrate LLMs, APIs, and workflows to automate multi-step tasks.
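
As a rough illustration of that orchestration pattern (not NVIDIA’s actual implementation), the sketch below shows an agent loop in which an LLM decides which API to call at each step; call_llm and get_weather are hypothetical stand-ins.

    def call_llm(messages):
        # Hypothetical stand-in for an LLM call that returns a tool request or a final answer
        if not any(m["role"] == "tool" for m in messages):
            return {"tool": "get_weather", "args": {"city": "San Francisco"}}
        return {"final": "It is sunny in San Francisco; no umbrella needed."}

    TOOLS = {"get_weather": lambda city: f"Weather in {city}: sunny, 18C"}  # mock API

    def run_agent(task: str) -> str:
        messages = [{"role": "user", "content": task}]
        for _ in range(5):  # bound the number of agent steps
            decision = call_llm(messages)
            if "final" in decision:
                return decision["final"]
            result = TOOLS[decision["tool"]](**decision["args"])
            messages.append({"role": "tool", "content": result})
        return "Step limit reached"

    print(run_agent("Do I need an umbrella in San Francisco today?"))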

Efficient inference with Command A: Optimizing speed and cost for enterprise AI

In the enterprise AI landscape, balancing speed, cost, and performance is critical. This talk explores the innovative techniques behind Command A's efficient inference pipeline, designed to deliver high-quality results at a low cost.

Training video models at scale

Scaling generative video models poses unique challenges across architecture, data, optimization, and deployment. In this talk, we’ll explore the key decisions involved in building these models: from designing architectures that balance quality and efficiency, to curating diverse and temporally coherent datasets, to managing large-scale training and inference-time constraints. Drawing on real-world experience, the talk will offer practical insights into what it takes to train and deploy high-quality generative video models at scale.

Fueling Innovation at Scale: Inside Pinterest's Machine Learning Platform

Explore how Pinterest's Machine Learning Platform drives innovation at scale, powering personalized experiences for millions of users worldwide. This talk offers a high-level overview of our approach to building ML platforms and the vibrant ecosystem that enables rapid experimentation and iteration of ML innovations. It will also highlight how Weights & Biases (W&B) is seamlessly integrated into our ML lifecycle to support experiment tracking, model registry, and collaborative workflows. Discover how this integration streamlines our processes and empowers our teams to deliver state-of-the-art ML solutions at Pinterest's pace and scale.
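
For readers unfamiliar with that kind of integration, here is a minimal sketch of experiment tracking and model registration using the public wandb API; the project name, config, metrics, and commented-out file path are illustrative, not Pinterest's actual setup.

    import random
    import wandb

    run = wandb.init(project="ranking-model", config={"lr": 1e-3, "epochs": 3})

    for epoch in range(run.config.epochs):
        train_loss = 1.0 / (epoch + 1) + random.random() * 0.01  # stand-in for a real training loop
        wandb.log({"epoch": epoch, "train_loss": train_loss})

    # Version the trained weights as an artifact for the model registry
    artifact = wandb.Artifact("ranking-model", type="model")
    # artifact.add_file("model.pt")  # would attach the real checkpoint here
    run.log_artifact(artifact)
    run.finish()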

From playground to production: Turbocharging GenAI innovation with AWS and Weights & Biases

Discover how the AWS and Weights & Biases partnership accelerates enterprise GenAI development from experimentation to production. This session highlights key technical integrations between W&B's MLOps platform and AWS services, including testing Bedrock models in W&B Playground, evaluating LLMs on Bedrock, and monitoring Bedrock Agents with W&B Weave. Learn how these seamless workflows help teams iterate faster, maintain governance, and deliver production-ready GenAI applications with confidence.
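
The sketch below illustrates one of those integrations, tracing an Amazon Bedrock call with W&B Weave; the project name and model ID are placeholders, and it assumes boto3, weave, and AWS credentials are already configured.

    import boto3
    import weave

    weave.init("bedrock-demo")  # placeholder W&B project name
    bedrock = boto3.client("bedrock-runtime")

    @weave.op()  # records inputs, outputs, and latency of each call in Weave
    def ask_bedrock(prompt: str) -> str:
        response = bedrock.converse(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        return response["output"]["message"]["content"][0]["text"]

    print(ask_bedrock("Name one risk when moving a GenAI prototype to production."))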

Toward zero traffic accidents: How AI agents are revolutionizing automotive software development

To build safer automated driving systems, Toyota tested its software with thousands of hours of real-world, on-road driving. Manually checking these driving videos was slow and costly, so the team developed AutoTriage, a video AI agent that finds system errors and identifies their root causes. They share three key takeaways behind its success.

Label factory: LLMs for training small language classification models at scale

Social media content moderation at scale is a challenging task; beyond actually performing inference at scale, there are significant challenges to scaling up creation and evaluation of content moderation models. Learn how Zefr's Label Factory uses a knowledge-distillation approach to generate unsupervised classification labels for training purposes.
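
To make the knowledge-distillation idea concrete, here is a toy sketch (not Zefr's pipeline): a hypothetical llm_label teacher labels raw posts, and a small TF-IDF plus logistic-regression student is trained on those labels.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def llm_label(text: str) -> str:
        # Hypothetical stand-in for prompting a large teacher model
        return "unsafe" if "scam" in text.lower() else "safe"

    raw_posts = [
        "Huge crypto scam, send money now!",
        "Lovely sunset photo from my hike today",
        "Limited-time scam offer, click the link",
        "Recipe for the best banana bread",
    ]
    labels = [llm_label(p) for p in raw_posts]  # unsupervised labels from the teacher

    # Small, cheap student model that can run at content-moderation scale
    student = make_pipeline(TfidfVectorizer(), LogisticRegression())
    student.fit(raw_posts, labels)
    print(student.predict(["Another scam giveaway, act fast"]))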

Beyond RAG: Production-ready AI agents powered by enterprise-scale data

Enterprise data isn’t always tidy—and AI agents need more than great retrieval to drive real value. In this session, Ryan shares what they’ve learned at Snowflake about enabling agents that deeply understand and reason over structured business data, allowing their users to reliably "talk to their data."

A T cell foundation model for AI-powered target discovery and precision medicine

Learn how ArsenalBio’s automation lab and discovery platform enable large-scale data generation for training and validating AI models of T cells, a core cell type of the immune system with key roles in cancer, autoimmunity, and infection.

One size doesn’t fit all: Building AI agents specialized for your enterprise

Generic AI agents consistently fall short in complex enterprise environments. Discover how leading AI teams are moving beyond one-size-fits-all solutions to build AI agents purposefully designed for their use case, unlocking greater accuracy and real-world impact.

Synthetic data in medical device AI: Challenges and opportunities

SandboxAQ has developed a novel magnetocardiography (MCG) device for providing real-time decision support to cardiologists. Hailey and Geoff discuss how their team used large-scale synthetic data and deep transfer learning to address the data scarcity challenges associated with developing novel medical technologies.
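
As a generic illustration of that synthetic-to-real recipe (not SandboxAQ's actual models or data), the sketch below pretrains a toy network on plentiful synthetic signals, then freezes the backbone and fine-tunes only the head on a small "real" dataset.

    import torch
    import torch.nn as nn

    backbone = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU())
    head = nn.Linear(16, 2)
    model = nn.Sequential(backbone, head)

    def train(model, x, y, lr, steps=200):
        opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

    # 1) Pretrain on abundant synthetic signals (random tensors stand in for simulated data)
    x_syn, y_syn = torch.randn(5000, 64), torch.randint(0, 2, (5000,))
    train(model, x_syn, y_syn, lr=1e-3)

    # 2) Freeze the backbone and fine-tune only the head on scarce real recordings
    for p in backbone.parameters():
        p.requires_grad = False
    x_real, y_real = torch.randn(100, 64), torch.randint(0, 2, (100,))
    train(model, x_real, y_real, lr=1e-4)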

Run the model, not the risk: Powering private inference for enterprise AI anywhere

Inference is the new frontier for enterprise AI—but using sensitive data securely is the bottleneck to production. In this talk, you'll get a technical overview, real-world examples, and integration guidance that keeps your stack intact. Whether you're building AI inside the enterprise or selling it, this session shows how to escape the infrastructure trap—and run the model, not the risk.

SPONSORS

Thanks to our sponsors