PyTorch 1.13 Released
Version 1.13 of PyTorch was released on Friday, bringing many changes including CUDA 11.7 support, better M1 chip support, and more.
Created on October 31 | Last edited on October 31
PyTorch 1.13 was released on Friday along with updates to many of its companion libraries. In this big update, CUDA 11.7 support is added while older CUDA versions are deprecated, Apple's M1 chip gains expanded support, BetterTransformer reaches its stable release, and much more.
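After upgrading, one quick way to confirm what your new build supports is to query PyTorch's version and backend flags. A minimal sketch (the printed values depend on your particular install):

```python
import torch

# Report the installed PyTorch build and the CUDA toolkit it was compiled against
print(torch.__version__)   # e.g. "1.13.0"
print(torch.version.cuda)  # e.g. "11.7", or None on a CPU-only build

# On Apple Silicon, check whether the MPS (Metal) backend is usable
print(torch.backends.mps.is_available())
```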
PyTorch 1.13 highlights
- BetterTransformer stable release: The BetterTransformer API, initially released with PyTorch 1.12 in a prototype state, is now stable. BetterTransformer speeds up transformer model inference with fastpath execution. Check here to learn more about BetterTransformer.
- CUDA version changes: Support for CUDA versions 10.2 and 11.3 is now deprecated, allowing for the more focused integration of newer CUDA versions into PyTorch as they are released. CUDA 11.7 is now supported alongside CUDA 11.6, which was already supported in PyTorch 1.12.
- functorch integration: The functorch library (still in a beta state) is now integrated directly into the main PyTorch package, allowing users to import functorch without needing to install a separate package.
- Intel VTune Profiler support: Profiling PyTorch script execution with the Intel VTune Profiler on Intel systems is now in beta.
- New NNC options: BF16 and channels-last optimizations are now supported (in beta) for x86 CPUs within TorchScript.
- M1 Device Support: Prototype support for Apple's M1 chips was first added in PyTorch 1.12. That support is now in beta.
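To illustrate the functorch integration above, its composable transforms can now be imported straight from the bundled package with no separate install. A minimal sketch using `grad` and `vmap` from functorch's public API:

```python
import torch
from functorch import grad, vmap  # ships inside PyTorch 1.13; no separate package needed

def loss(x):
    # Simple scalar function: sum of squares
    return (x ** 2).sum()

x = torch.tensor([1.0, 2.0, 3.0])

# grad returns a function computing the gradient: d/dx sum(x^2) = 2x
print(grad(loss)(x))  # tensor([2., 4., 6.])

# vmap vectorizes loss over a leading batch dimension without an explicit loop
batch = torch.stack([x, 2 * x])
print(vmap(loss)(batch))  # tensor([14., 56.])
```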
Find out more
Read the announcement blog post for more information on the PyTorch 1.13 highlights, or check out the full changelog if you're curious about all the changes made in this update. You can also read about all the updates made to PyTorch's companion libraries here.
Tags: ML News