
PyTorch 1.12 Released

PyTorch 1.12 has been released, bringing a long list of changes and improvements.
Created on June 29 | Last edited on July 1
The newest version of PyTorch has finally rolled out. PyTorch 1.12 comes with a huge list of changes, including new features and bug fixes galore. The last big update came out in March, so there's a lot to catch up on.


What's new in PyTorch 1.12?

Far too much has changed in this version of PyTorch to list everything here, so I'll try to summarize and highlight the important stuff featured in the update blog post. You can take a look at the full GitHub changelog here. Short code sketches for most of the features below follow the list.
  • TorchArrow: TorchArrow is a new library released in beta with this update. The library is built to speed up preprocessing over batch data through an easy-to-use API.
  • Module: Module computation gains a beta functional API. The new functional_call() lets you take complete control over the parameters used in a Module's computation.
  • Complex Numbers: This update introduces two new features for complex numbers. The first is new functionality for complex convolutions, and the second is experimental support for the Complex32 datatype.
  • TorchData: DataPipe has improved compatibility with DataLoader. PyTorch now supports AWSSDK-based DataPipes. DataLoader2 has been introduced as a way to manage interactions between DataPipes and other APIs and backends.
  • functorch 0.2.0: PyTorch's own JAX-like feature set sees significant coverage improvements for functorch.jvp and the APIs that rely on it. The new functionalize() API takes a function and returns a version of it that is free of in-place mutations.
  • nvFuser: nvFuser is the new, faster default fuser for compiling to CUDA devices.
  • Matrix Multiplication Precision: By default, matrix multiplication on float32 datatypes will now work in full precision mode, which is slower but will result in more consistent outcomes.
  • Channels Last Memory Format: New support for Channels Last memory format on computer vision models brings big performance gains on vision model inference.
  • Bfloat16: Lower-precision datatypes compute much faster, so 1.12 brings further improvements to the Bfloat16 datatype.
  • Accelerated Training On Mac: Support for taking advantage of Apple silicon GPUs in PyTorch has been added in a prototype state. Apple's Metal Performance Shaders are used as the backend for operations done on these GPUs.
  • BetterTransformer: Certain Transformer Encoder modules get a performance boost with new fastpath implementations.
  • FSDP API: Released in version 1.11 as a prototype, the FSDP API graduates to beta in version 1.12, adding a number of its own improvements.
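
To make a few of these concrete, here are some short sketches, starting with TorchArrow. This is a minimal illustration assuming the separate torcharrow package is installed; the specific column operations are my own example rather than anything from the release notes.

    import torcharrow as ta

    # build a columnar dataframe; operations run vectorized over the whole batch
    df = ta.dataframe({"a": [1, 2, 3], "b": [10.0, 20.0, 30.0]})
    scaled = df["b"] / df["b"].max()
    print(scaled)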
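
For the Module functional API, functional_call() lives under torch.nn.utils.stateless in 1.12. A minimal sketch of substituting parameters for a single forward pass:

    import torch
    from torch.nn.utils import stateless

    model = torch.nn.Linear(3, 2)
    x = torch.randn(1, 3)

    # run the module with substitute parameters, without touching its own state
    override = {"weight": torch.zeros(2, 3), "bias": torch.zeros(2)}
    out = stateless.functional_call(model, override, (x,))
    print(out)           # zeros, since we swapped in zero weights
    print(model.weight)  # the module's real parameters are unchanged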
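
For the complex-number features, a quick sketch of both additions. complex32 is experimental, so operator coverage for it is limited, and I'm assuming conv1d's new complex support handles this simple CPU case:

    import torch

    # experimental half-precision complex dtype
    z = torch.tensor([1 + 2j, 3 - 4j], dtype=torch.complex32)

    # complex convolution: complex-valued input and weight
    x = torch.randn(1, 1, 8, dtype=torch.complex64)
    w = torch.randn(1, 1, 3, dtype=torch.complex64)
    y = torch.nn.functional.conv1d(x, w)
    print(z.dtype, y.dtype)  # torch.complex32 torch.complex64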
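
For TorchData, a minimal DataPipe pipeline handed to the regular DataLoader; this assumes the separate torchdata package is installed:

    from torch.utils.data import DataLoader
    from torchdata.datapipes.iter import IterableWrapper

    # compose a pipeline out of DataPipes, then feed it to the usual DataLoader
    pipe = IterableWrapper(range(10)).map(lambda x: x * 2).shuffle()
    loader = DataLoader(pipe, batch_size=4)
    for batch in loader:
        print(batch)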
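
For functorch, a small forward-mode differentiation sketch with jvp (functorch ships as its own package alongside PyTorch 1.12):

    import torch
    from functorch import jvp

    def f(x):
        return x.sin().sum()

    x = torch.randn(3)
    v = torch.ones(3)

    # forward-mode AD: f(x) plus the directional derivative of f at x along v
    out, tangent = jvp(f, (x,), (v,))
    print(out, tangent)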
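
nvFuser needs no opt-in: TorchScript functions running on CUDA pick it up automatically. A sketch of the kind of pointwise chain that benefits:

    import torch

    # TorchScript code on CUDA is now fused by nvFuser with no code changes
    @torch.jit.script
    def gelu_bias(x, bias):
        return torch.nn.functional.gelu(x + bias)

    if torch.cuda.is_available():
        x = torch.randn(1024, 1024, device="cuda")
        b = torch.randn(1024, device="cuda")
        y = gelu_bias(x, b)  # the add and gelu can fuse into one kernel after warm-up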
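
If you'd rather keep the old, faster TF32 matmul behavior on Ampere GPUs, the existing backend flags let you opt back in:

    import torch

    # float32 matmuls now default to full precision; opt back into the faster,
    # lower-precision TF32 behavior explicitly if you prefer speed
    torch.backends.cuda.matmul.allow_tf32 = True  # matrix multiplications
    torch.backends.cudnn.allow_tf32 = True        # cuDNN convolutions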
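
For Channels Last, a minimal inference sketch using a torchvision ResNet-50 as a stand-in for whichever vision model you're running:

    import torch
    import torchvision

    # move both the model and the input to the channels-last memory layout
    model = torchvision.models.resnet50().eval()
    model = model.to(memory_format=torch.channels_last)
    x = torch.randn(1, 3, 224, 224).to(memory_format=torch.channels_last)

    with torch.inference_mode():
        out = model(x)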
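
One easy way to try Bfloat16 is autocast, which runs eligible ops in the lower-precision dtype:

    import torch

    model = torch.nn.Linear(128, 128)
    x = torch.randn(32, 128)

    # autocast runs eligible CPU ops in bfloat16 for a speedup at lower precision
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        out = model(x)
    print(out.dtype)  # torch.bfloat16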
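
For accelerated training on Mac, a minimal check-and-use sketch (this needs an Apple-silicon machine with a macOS build of PyTorch 1.12):

    import torch

    # prototype Apple-silicon GPU support via the Metal Performance Shaders backend
    if torch.backends.mps.is_available():
        device = torch.device("mps")
        x = torch.randn(4, 4, device=device)
        y = (x @ x).relu()
        print(y.device)  # mps:0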
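
BetterTransformer's fastpath kicks in automatically for eligible encoder configurations in inference mode; a minimal sketch:

    import torch

    layer = torch.nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    encoder = torch.nn.TransformerEncoder(layer, num_layers=2).eval()

    x = torch.rand(8, 16, 64)  # (batch, sequence, features)
    with torch.inference_mode():
        out = encoder(x)  # eligible configurations take the fused fastpath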
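
Finally, FSDP requires a multi-process distributed launch, so this is only a structural sketch; MyModel, local_rank, and the launch itself are placeholders:

    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    # FSDP shards parameters, gradients, and optimizer state across workers.
    # It needs an initialized process group, normally set up by torchrun, e.g.:
    #
    #   dist.init_process_group("nccl")
    #   model = FSDP(MyModel().to(local_rank))
    #
    # after wrapping, the model trains like any other nn.Module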

Find out more

Tags: ML News