PyTorch For Fedora - Asahi

Hello, I am currently working with a few people to package PyTorch for Fedora across various accelerators. So far we have shipped a CPU-only build, and we plan to package ROCm support next. If anyone would like to join the endeavor, we have a meeting tomorrow, Thursday, January 4, at 9 AM EST. Here is our SIG page, and here is the meeting info:

PyTorch for Fedora
Thursday, January 4 · 9:00 – 9:30am
Time zone: America/New_York
Google Meet joining info
Or dial: (GT) +502 2458 1186 PIN: 938 297 358 4693#

If you have any questions, feel free to email me. Thanks!

Are they ever going to merge an OpenCL backend upstream? The only ML backends we can support are:

  • CPU (ARM64 NEON; nothing Asahi-specific)
  • OpenGL compute (I think TensorFlow Lite already supports this, but only for running completed models, i.e. inference)
  • OpenCL (support coming soon; Vulkan compute further down the line)
  • ANE (lots of things TBD; it will need a custom backend, and it is only useful for running fp16 nets, not for training or anything fp32)
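To illustrate why an fp16-only accelerator like the ANE is limited to inference: half precision cannot represent the tiny gradient magnitudes that show up during training, so they flush to zero. A quick sketch using Python's stdlib `struct`, which supports the IEEE 754 half-precision format code `'e'` (this is a generic numeric demonstration, not ANE-specific code):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('e', struct.pack('e', x))[0]

# A small late-training gradient magnitude is below fp16's smallest
# subnormal (~6e-8) and flushes to zero, while an ordinary weight
# value survives the round trip unchanged:
print(to_fp16(1e-8))  # -> 0.0
print(to_fp16(0.5))   # -> 0.5
```

This is exactly why fp16-only hardware works fine for running a finished net but not for training it.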

IOW, the major useful thing for Asahi right now would be for PyTorch to support OpenCL.
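In the meantime, anything targeting Asahi has to probe what exists at runtime and fall back to the CPU. A minimal sketch of that selection logic, in the preference order implied by the list above (the backend names and the `available` set are illustrative, not a real PyTorch API):

```python
# Preference order mirroring the list above; "cpu" is the only backend
# guaranteed to exist on Asahi today. All names here are illustrative.
PREFERENCE = ("opencl", "vulkan", "opengl", "cpu")

def pick_backend(available):
    """Return the first preferred backend present in `available`."""
    for name in PREFERENCE:
        if name in available:
            return name
    raise RuntimeError("no usable compute backend")

print(pick_backend({"cpu"}))            # -> cpu
print(pick_backend({"cpu", "opencl"}))  # -> opencl
```

Once an OpenCL backend lands upstream, a fallback chain like this is what would let the same code start using the GPU with no other changes.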