Aleph Alpha Builds Europe’s Most Advanced LLMs with W&B

"W&B gives us a concise look at all projects. We can compare runs, aggregate them all in one place and intuitively decide what works well and what to try next."
Samuel Weinbach
VP of Technology

Accelerating AI in the EU

While most of the leading large language model developers are based in the US, at least one promising company in Germany can compete with them. That’s right, we’re talking about Aleph Alpha.
 
The Heidelberg-based startup has developed its own family of large language models. Aleph Alpha’s latest benchmark results show that its models are comparable to OpenAI’s GPT-3, BigScience’s BLOOM, and Meta’s OPT.
 
As one would expect, training LLMs requires a substantial amount of resources, time, and experiments. Finding a way to scale and accelerate their ML efforts became a priority for Aleph Alpha. With its highly scalable, robust, and collaborative environment, Weights & Biases became the obvious solution to help Aleph Alpha in their quest to build state-of-the-art LLMs.

Unlocking Explainability

Beyond raw performance, what sets Aleph Alpha apart from the competition is the company’s focus on transparency and traceability. Earlier this year, Aleph Alpha introduced the “Explain” feature, intended to make the results of LLMs more interpretable to users. This new functionality makes it possible for the system to reason about its output and give users insight into the decision-making process of the model.
 
 

A New Mode of Operations

In the early days of Aleph Alpha, the team consisted of the founders and a few other researchers. While open-source platforms met their needs then, those tools quickly became insufficient as they began to run more experiments and train larger models. This was especially apparent when Aleph Alpha was training their largest model at the time, a 13B-parameter model, which is now actually the smallest model in the family.
 
“Our previous solution didn’t scale well with the large number of experiments we were running,” said Samuel Weinbach, VP of Technology at Aleph Alpha. “It became slow and tedious to manage them, and its capability for experiment comparison was limited.”
 
On a colleague’s strong recommendation, Samuel chose W&B to improve the tracking and overall management of their large-scale experiments. The interactive visualizations and customizable UI of the W&B platform made it easy for the team to gain a holistic view of their end-to-end LLM operations. This level of insight into model performance enabled them to quickly iterate on hypotheses and see, in real time, which hyperparameters and model architectures performed best.
 
“W&B gives us a concise look at all projects. We can compare runs, aggregate them all in one place and intuitively decide what works well and what to try next,” said Samuel.
 
Beyond that, gaining visibility into their entire system utilization helped tremendously with hardware optimization. The team could answer questions like: Are we using too many GPU resources? Where are our training bottlenecks? What’s the optimal batch size? With LLM training being such a resource-intensive endeavor, W&B helped Aleph Alpha get the most out of its compute infrastructure.
 
Today, Aleph Alpha is a growing team of more than a dozen like-minded individuals. Facilitating and encouraging collaboration is key to accelerating their LLM training: they need to share results and feedback, and to have a single place where all project knowledge is centralized. Their shared W&B workspace provides that central hub, enabling seamless communication, sparking new ideas, and creating transparency across every stage of the LLM development lifecycle.
 
“Everyone is on the same page around projects,” said Samuel. “We can easily share graphs, notes and findings.”
 

Reimagining the LLM Landscape

While LLMs have taken the world of artificial intelligence by storm, most companies lack the means to train these models and rely on established tech firms to provide the technology. But a new wave of generative AI startups is carving out its own space in the LLM landscape, and Aleph Alpha is undoubtedly one of those emerging players.
 
To meet the operational challenges of training LLMs, Aleph Alpha needed a comprehensive platform to manage their evolving experiments, enable efficient training, and foster meaningful collaboration. To date, the team has used W&B to train 62,000 models over the span of 271,000 hours, with their longest training run lasting 960 hours. Choosing W&B has been key to efficiently scaling their LLM training and establishing Aleph Alpha at the forefront of the international AI landscape.