I’d like to know if there’s a way to enable the boost clock. I’ve installed cpupower, but changing the governor to performance doesn’t enable it.
The reason I ask is that I bought a new PC with both CPU and GPU from AMD, since they are far more Linux-friendly, and playing CS:GO gives me micro-stutters which are pretty annoying. As a small addendum: I was expecting close to 300 fps, but right now I’m not even close (200 or less).
Specs are a Ryzen 5 3600 and an RX 5700 XT; below is the output of cpupower frequency-info.
$ cpupower frequency-info
analyzing CPU 0:
CPUs which run at the same hardware frequency: 0
CPUs which need to have their frequency coordinated by software: 0
maximum transition latency: Cannot determine or is not supported.
hardware limits: 2.20 GHz - 3.60 GHz
available frequency steps: 3.60 GHz, 2.80 GHz, 2.20 GHz
available cpufreq governors: conservative ondemand userspace powersave performance schedutil
current policy: frequency should be within 2.20 GHz and 3.60 GHz.
The governor "performance" may decide which speed to use
within this range.
current CPU frequency: Unable to call hardware
current CPU frequency: 3.61 GHz (asserted by call to kernel)
boost state support:
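For completeness, here is how I’ve been checking the boost state directly. The sysfs path assumes the acpi-cpufreq driver is in use; other cpufreq drivers expose boost differently, if at all:

```shell
# Show the boost section of cpupower's output (truncated above)
cpupower frequency-info | grep -A 3 "boost state"

# With the acpi-cpufreq driver, boost is a single global sysfs toggle:
cat /sys/devices/system/cpu/cpufreq/boost    # 1 = boost enabled, 0 = disabled

# Re-enable it if it reads 0:
echo 1 | sudo tee /sys/devices/system/cpu/cpufreq/boost
```

If that file doesn’t exist on your machine, the kernel is probably using a different cpufreq driver, which would be worth mentioning too.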
If you are rendering your desktop environment using the CPU’s integrated graphics (AMD calls those chips APUs, I think), @vgaetera’s suggestion would certainly help to a greater extent. If you are using the discrete GPU for it, it would still help, but to a much lesser extent.
Also, stutters are more often a model-loading issue than a texture-loading issue, and when it comes to the geometry of in-game objects, it is the CPU that gets used more heavily. Furthermore, answers to the following would help us help you much more effectively:
What display server are you using? (X11, Wayland or something else?)
Thermal clearance of the on-board devices. (Maybe run a stress test and check for throttling.)
Are you facing screen-tearing issues? (And maybe mistaking them for micro-stutters?)
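To make gathering those answers easier, these are the commands I would start with. This assumes lm_sensors and stress-ng are installed; package names may differ per distro:

```shell
# 1. Which display server is the current session using?
echo $XDG_SESSION_TYPE         # prints "x11" or "wayland"

# 2. Thermals: load every core for a minute...
stress-ng --cpu $(nproc) --timeout 60s

# ...while watching temperatures and clocks in a second terminal:
watch -n1 sensors              # temperatures
grep MHz /proc/cpuinfo         # effective per-core clocks under load
```

If the clocks sag well below 3.60 GHz while temperatures climb, that points at throttling rather than the governor.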
@colonel-panic No, not really. I had an issue earlier where the symptoms seemed to be pretty close to the ones I am having at the moment, so my guess was that it could have been the governor this time, too.
However, upon closer inspection, I noticed that I have high load (~99%) and low fps (below 150) on the GPU.
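In case it’s useful to anyone, this is how I’ve been watching the load and frame rate. radeontop and MangoHud are my own tool choices here; any overlay or monitor would do:

```shell
# Live GPU utilisation / VRAM / clocks for AMD cards:
radeontop

# Per-game overlay with fps, frametimes, and CPU/GPU load; in Steam,
# set the game's launch options to:
#   mangohud %command%
```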
I would not make the mistake here of saying “Unused CPU is wasted CPU”, because that isn’t really true. Your GPU is clearly breaking a sweat running things that could have run just fine under the right circumstances. Also, 150 fps is respectable, depending on the refresh rate of your monitor. That brings me to the next question: is the frame rate you attain here a multiple of your refresh rate?
I don’t really know how Wayland behaves with AMD cards. With NVIDIA, Wayland performed like complete crap - though NVIDIA is to blame for that, and users had to resort to X11 for respectable performance. You might want to switch to X11 temporarily from the login window to see if it helps.
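On Fedora the login manager is GDM, so picking the session type from the gear icon on the login screen is the temporary route. Forcing X11 persistently would look like this (standard GDM config; back up the file first):

```ini
# /etc/gdm/custom.conf
[daemon]
# Comment this line out again to return to Wayland
WaylandEnable=false
```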
The way I do it is by running HandBrake once CPU-only and once GPU-only to downscale a long video from 1080p to 720p, then comparing the result against the rated specifications. Folks may find this way of stress-testing very unconventional, but at least that’s how I do it. A 5-10% deviation is okay, but anything beyond 15% is an indication that your device’s performance was throttled due to heat.
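That 15% rule of thumb is easy to script. This is just a sketch with made-up encode times; the throttle_check helper and its inputs are my own invention:

```shell
# Flag likely thermal throttling from two HandBrake wall-clock times
# (seconds): a cool first run vs. a sustained hot run.
throttle_check() {
  local baseline=$1 sustained=$2
  # integer percent slowdown of the hot run relative to the cool run
  local dev=$(( (sustained - baseline) * 100 / baseline ))
  if [ "$dev" -gt 15 ]; then
    echo "deviation ${dev}%: throttling likely"
  else
    echo "deviation ${dev}%: within tolerance"
  fi
}

# Made-up example times:
throttle_check 100 108   # → deviation 8%: within tolerance
throttle_check 100 130   # → deviation 30%: throttling likely
```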
When it comes to disabling or reducing throttling, I would strongly advise against “treating the symptom” here. Throttling is a safety feature that purposely reduces performance so the device is not pushed into emitting even more heat than it already is. If you built this PC yourself, the things you want to check are whether the cooling hardware has been mounted properly and whether anything is obstructing clean airflow.
I see. If you have tried either Manjaro or Pop!_OS (which I suspect), you were using the X11 display server there, so we cannot quite compare those with Fedora - it’s an apples-and-oranges situation. If it was neither of the above, tell me more about it so we can replicate the configs.
Well, in terms of performance, the 2070 is approximately 10-15% more powerful than the 5700 XT. It is, however, $100 more expensive, too. But then again, it has dedicated ray-tracing hardware and is generally better across the board than the 5700 XT.
Kind of embarrassing, considering that they were released at the same time and AMD’s flagship GPU (RX 5700 XT) performs worse than NVIDIA’s mid-range GPU (RTX 2070).
It’s only now, with the 6000 series, that AMD has become competitive again. About time, too, considering that they haven’t released a “competitive” GPU in the last 10 years - which is why NVIDIA has been able to charge “blood prices” for theirs.
Yeah. It makes picking NVIDIA cards a no-brainer for machine-learning applications and for video games with ray-tracing support. I was skeptical about the differences in lighting and unsure how big a change ray tracing would bring to the table until I saw the demonstration of dynamic global illumination in a popular first-person shooter, Battlefield V.
With just $100 more (even more so outside the States), the former does seem to provide a significant improvement over the latter - and that for a mid-level card in the market. Though I doubt the same holds for GNU/Linux. How inclined would they be to write ray-tracing support for these desktops, and which applications would actually be able to benefit from it?
I would say that both have different use-cases. AMD sports support and NVIDIA sports raw power.
I am glad that with the RTX 3000 series, the prices of NVIDIA GPUs have come down significantly, making something as elegant as the RTX 3080 accessible and relatively inexpensive. If AMD has come back to form, it now has to compete with NVIDIA on pricing as well as performance. There’s a lot for team red to catch up on.