PyTorch/XLA 2.4

Researchers and practitioners can combine the open-source PyTorch machine learning (ML) library with the XLA ML compiler for flexible, high-performance model development and training.

The PyTorch/XLA team is pleased to release version 2.4 today. This release builds on the previous one and addresses feedback from developers.

Although the XLA compiler can optimize your models automatically, custom kernel code can give model authors even better performance in some cases.

Pallas, a custom-kernel language that supports both TPUs and GPUs, lets you write high-performance kernels in Python, closer to the hardware, without dropping down to C++.
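To make this concrete, here is a minimal sketch of a Pallas kernel. Pallas ships with JAX, and PyTorch/XLA can invoke kernels written this way; the example below is a plain element-wise add, run in Pallas's interpreter mode (`interpret=True`) so it works without TPU or GPU hardware. The function and variable names are illustrative, not taken from the release.

```python
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl


def add_kernel(x_ref, y_ref, o_ref):
    # A Pallas kernel operates on references to device memory;
    # reading/writing with [...] touches the whole block.
    o_ref[...] = x_ref[...] + y_ref[...]


def add_vectors(x, y):
    # pallas_call wraps the kernel into a normal JAX-callable function.
    # interpret=True runs it on CPU for portability in this sketch.
    return pl.pallas_call(
        add_kernel,
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        interpret=True,
    )(x, y)


x = jnp.arange(8, dtype=jnp.float32)
y = jnp.ones(8, dtype=jnp.float32)
out = add_vectors(x, y)
```

On a real TPU or GPU you would drop `interpret=True` and let Pallas lower the kernel to the accelerator.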

This means PyTorch/XLA 2.4 traces your model's operations into a compute graph before compiling and dispatching it to the target XLA device hardware.

On TPUs, PyTorch/XLA adds a "mark step" call to force compilation and execution of the recorded graph.
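The lazy-tensor mechanism described above can be illustrated with a toy sketch. This is not the real torch_xla API; it only mimics the idea that operations are recorded into a graph and executed when a "mark step" boundary is reached.

```python
# Toy illustration of the lazy-tensor idea (hypothetical names, not torch_xla):
# each arithmetic op builds a graph node instead of computing immediately.
class LazyTensor:
    def __init__(self, value=None, op=None, inputs=()):
        self.value = value      # concrete value, once evaluated
        self.op = op            # deferred operation
        self.inputs = inputs    # upstream graph nodes

    def __add__(self, other):
        return LazyTensor(op=lambda a, b: a + b, inputs=(self, other))

    def __mul__(self, other):
        return LazyTensor(op=lambda a, b: a * b, inputs=(self, other))


def mark_step(t):
    """Force evaluation of the recorded graph, analogous to a mark-step call."""
    if t.value is None:
        t.value = t.op(*(mark_step(i) for i in t.inputs))
    return t.value


a, b = LazyTensor(2.0), LazyTensor(3.0)
c = (a + b) * a          # nothing is computed yet; only the graph is recorded
result = mark_step(c)    # the whole graph is evaluated at this boundary
```

In real PyTorch/XLA the recorded graph is handed to the XLA compiler at the step boundary, so the compiler can optimize across every operation in the step rather than one op at a time.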

Existing code continues to work with PyTorch/XLA 2.4 despite the API changes, and the new API methods will make future development easier.

Accelerated Linear Algebra (XLA) is an open-source machine learning compiler.