Not really an issue, more a lack of information and proper instructions/support from both AMD and OpenAI, which makes getting Whisper to work with an AMD card pretty much a coin toss. I had it working twice, and it suddenly broke after something updated.
So, I wonder if any Fedora user managed to get this to work currently. My goal is to create a Distrobox with all the Whisper/Rocm stuff in it and freeze its updates, so I never have to deal with it breaking randomly again.
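Roughly the setup I have in mind (the image reference here is just an example; pinning by digest instead of the `:41` tag would freeze the base image completely):

```shell
# Create the container from a pinned Fedora 41 image. Pinning by digest
# (image@sha256:...) instead of the :41 tag would freeze it completely.
distrobox create --name whisper-rocm --image registry.fedoraproject.org/fedora:41
distrobox enter whisper-rocm
```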
I've already tried to do so, with no success. In a Fedora 41 Distrobox, I install python-pip and python3.12 (besides all the weak dependencies like git and ffmpeg). It has to be python3.12, as it's the only version that will install Torch with ROCm support in the next step. Inside a virtualenv:
pip install torch torchaudio torchvision --index-url https://download.pytorch.org/whl/rocm6.2
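To keep the environment from drifting once it works, my plan is to snapshot the exact versions right away (standard pip workflow, nothing Whisper-specific about it):

```shell
# Record the exact working versions so the env can be rebuilt identically later
pip freeze > requirements.lock
# Later, in a fresh venv, restore against the same ROCm wheel index:
pip install -r requirements.lock --index-url https://download.pytorch.org/whl/rocm6.2
```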
I then get my card (an RX 7700 XT) recognized by Torch:

>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.get_device_name(0)
'AMD Radeon RX 7700 XT'
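For what it's worth, here is the small sanity check I run as a script before blaming Whisper itself. It only assumes the standard torch API (the ROCm build exposes HIP devices through the `torch.cuda` namespace):

```python
# Sanity-check the ROCm/torch stack before involving Whisper at all.
import importlib.util

def rocm_status():
    """Return a one-line status string for the torch GPU stack."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed in this environment"
    import torch
    if not torch.cuda.is_available():  # ROCm builds report HIP devices here
        return f"torch {torch.__version__} installed, but no GPU is visible"
    return f"torch {torch.__version__} sees: {torch.cuda.get_device_name(0)}"

if __name__ == "__main__":
    print(rocm_status())
```

If this prints the card name, the Torch side is fine and the problem is further up the stack.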
But even then, after installing openai-whisper via pip (their recommended method), I still get either HIP errors, or segfaults if I export HSA_OVERRIDE_GFX_VERSION=10.3.0 as I've seen comments online recommend (for some reason, even though the 7700 XT is gfx1101, i.e. GFX version 11.0.1, as far as I know).
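In case it helps anyone reproduce, this is how I've been toggling the override per-invocation instead of exporting it globally. 11.0.0 is the value usually suggested for RDNA3 cards (since official ROCm support targets gfx1100), and 10.3.0 is the RDNA2 value from older comments; treat both as guesses:

```shell
# Try the RDNA3-style override for a single run (value unverified for the
# 7700 XT; swap in 10.3.0 to test the RDNA2-style override from old comments):
HSA_OVERRIDE_GFX_VERSION=11.0.0 python -c "import torch; print(torch.cuda.get_device_name(0))"
```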
I don't know what else could be wrong. I was told that I don't need ROCm installed system-wide, and I doubt the repo AMD provides for RHEL even works on Fedora. So, if anyone has this working: what am I missing? Is it just an upstream bug?