In collaboration with the Metal engineering team at Apple, PyTorch today announced that its open source machine learning framework will soon support GPU-accelerated model training on Apple silicon Macs powered by M1, M1 Pro, M1 Max, or M1 Ultra chips.
Until now, PyTorch training on the Mac has leveraged only the CPU, but an upcoming version will let developers and researchers take advantage of the integrated GPU in Apple silicon chips for “significantly faster” model training.
A preview build of PyTorch version 1.12 with GPU-accelerated training is available for Apple silicon Macs running macOS 12.3 or later with a native version of Python. Performance comparisons and additional details are available on PyTorch’s website.
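For the curious, the preview build exposes the Apple GPU through PyTorch's new "mps" device. A minimal sketch of how training code opts in, assuming the 1.12 preview is installed on macOS 12.3 or later (the CPU fallback below keeps the snippet runnable elsewhere):

```python
import torch

# Use the Apple-silicon GPU via the Metal Performance Shaders ("mps")
# backend when available; fall back to the CPU otherwise.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Move a small model and a batch to the selected device and run one
# forward/backward step, exactly as one would with a CUDA device.
model = torch.nn.Linear(8, 2).to(device)
x = torch.randn(4, 8, device=device)
loss = model(x).sum()
loss.backward()
print(f"trained one step on: {device.type}")
```

Existing training scripts need only swap their device string; the rest of the PyTorch API is unchanged.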
This article, “Machine Learning Framework PyTorch Enabling GPU-Accelerated Training on Apple Silicon Macs,” first appeared on MacRumors.com.