Downgrade CUDA to use PyTorch

Hi,
I recently installed the NVIDIA packages from RPM Fusion, but I found that CUDA 12.8 was installed, which PyTorch doesn't support at the moment.

How can I downgrade CUDA, or do something else, so I can use PyTorch with CUDA?
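From what I've read, the PyTorch pip wheels bundle their own CUDA runtime, and a newer driver can run older CUDA runtimes, so maybe I don't need to downgrade the system CUDA at all? Would something like this work? (The index URL follows the pattern on PyTorch's install page; whether a cu124 build exists for my setup is an assumption on my part.)

```shell
# Install a PyTorch wheel that bundles the CUDA 12.4 runtime
# (assumption: the 570 driver can run older CUDA runtimes)
pip3 install torch --index-url https://download.pytorch.org/whl/cu124

# Sanity check: does PyTorch see the GPU?
python3 -c "import torch; print(torch.cuda.is_available())"
```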

Thank you!

nvidia-smi
Wed Mar 26 22:00:18 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.133.07             Driver Version: 570.133.07     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4060 ...    Off |   00000000:01:00.0 Off |                  N/A |
| N/A   40C    P3             12W /   70W |      12MiB /   8188MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            2566      G   /usr/bin/gnome-shell                      2MiB |
+-----------------------------------------------------------------------------------------+