Lightning AI Releases Thunder Compiler
A new tool for making lightning models even faster!
Created on April 3 | Last edited on April 3
Lightning AI, in collaboration with Nvidia, recently unveiled Thunder, a source-to-source compiler for the open-source PyTorch machine learning framework. Thunder aims to make AI model training more efficient across multiple GPUs, promising up to 40% faster training for large language models compared with unoptimized code. The release is notable because it tackles the challenge of maximizing utilization of existing GPUs rather than simply deploying more of them.
Thunder
Thunder, which is freely available under an Apache 2.0 license, slots into the existing PyTorch acceleration ecosystem: it can dispatch work to executors such as torch.compile, Nvidia's nvFuser, Apex, and the CUDA Deep Neural Network library (cuDNN), as well as OpenAI's Triton. The release underscores Lightning AI's commitment to pushing the boundaries of deep learning performance in PyTorch, alongside contributions from other leading organizations such as OpenAI, Meta AI, and Nvidia.
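A source-to-source compiler like Thunder does not interpret operations eagerly; it traces a function into an intermediate representation and then generates new code that can be handed to optimized executors. The sketch below illustrates that trace-then-codegen idea in plain Python. All names here (Proxy, toy_jit) are invented for illustration and have nothing to do with Thunder's actual internals or API:

```python
# Toy illustration of source-to-source compilation: trace a function's
# operations symbolically, then generate and exec new Python source.
# Proxy and toy_jit are invented names for this sketch, not Thunder's API.

class Proxy:
    """Records arithmetic ops into a shared trace instead of computing."""
    def __init__(self, name, trace):
        self.name = name
        self.trace = trace

    def _emit(self, op, other):
        result = Proxy(f"t{len(self.trace)}", self.trace)
        rhs = other.name if isinstance(other, Proxy) else repr(other)
        self.trace.append(f"{result.name} = {self.name} {op} {rhs}")
        return result

    def __add__(self, other):
        return self._emit("+", other)

    def __mul__(self, other):
        return self._emit("*", other)

def toy_jit(fn):
    """Trace fn once with proxies, then compile the generated source."""
    def compiled(*args):
        trace = []
        proxies = [Proxy(f"a{i}", trace) for i in range(len(args))]
        out = fn(*proxies)
        params = ", ".join(p.name for p in proxies)
        body = "\n    ".join(trace + [f"return {out.name}"])
        src = f"def generated({params}):\n    {body}"
        namespace = {}
        exec(src, namespace)  # codegen step: run the generated source
        return namespace["generated"](*args)
    return compiled

def foo(a, b):
    return a + b * 2

jfoo = toy_jit(foo)
print(jfoo(3, 4))  # prints 11
```

A real compiler would cache the generated function, operate on tensor shapes and dtypes rather than scalars, and route each traced operation to whichever backend executes it fastest; the toy version only shows the shape of the transformation.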

Built by Experts
The compiler is spearheaded by PyTorch core developer Thomas Viehmann, whose expertise in making PyTorch operational on a variety of platforms is well-regarded within the community. Lightning AI CEO William Falcon highlighted Viehmann’s pivotal role in driving forward the company’s performance enhancement initiatives for the PyTorch and Lightning AI community.
The release is timely given the growing complexity and resource intensity of training large language models. And with adversarial uses of AI, such as training LLMs to manipulate and deceive, posing real risks, efficient and effective training methods matter more than ever. Lightning AI Chief Technology Officer Luca Antiga pointed out the inefficiency of the common practice of adding more GPUs to cover performance shortfalls. Thunder, alongside Lightning Studios' profiling tools, aims to optimize code directly, enabling users to train models faster and scale more predictably.
A Simple Example
```python
import torch
import thunder

def foo(a, b):
    return a + b

jfoo = thunder.jit(foo)

a = torch.full((2, 2), 1)
b = torch.full((2, 2), 3)

result = jfoo(a, b)
print(result)
# prints
# tensor([[4, 4],
#         [4, 4]])
```
Availability
Thunder is now accessible following Lightning AI’s release of Lightning 2.2 in February, and it is positioned as a pivotal tool for developers, researchers, scientists, startups, and large organizations looking to enhance their AI development workflows.
Tags: ML News