Fedora burns twice as much power as Ubuntu

Why do you think that it is other users’ responsibility to fact-check the several pages of LLM content that you chose to copy&paste here?

Frankly, I saw that you had posted AI output and then skipped past it. For me, there is no point in reading it: whether I agreed or disagreed with it, it ultimately isn’t your content or your opinion, so I see no point in engaging with it, and the AI would not see my response.

4 Likes

Let users decide for themselves. It was not even your topic!
Why do you think you should impose your opinions and beliefs on all users of this forum and start this off-topic discussion publicly?
If you don’t like certain content (that may be useful to others), you are welcome to skip it and read the next message.
If you personally don’t find AI content useful, others do.
Shall we prohibit useful info just because you don’t like it?

I’m wondering whether @darthziplock found it useful.

1 Like

To add more clarity on this…

It is critical that people cite sources and are explicit about what they have personally tested and what they have not.

If I were going to include instructions that I haven’t personally used from a pre-AI source like Stack Overflow, I would bring that into the discussion with a reference to the Stack Overflow URL where I found the instructions, and be very clear that I haven’t tested it myself.

What we are missing with the LLM stuff right now is a way for people to bring this in as citable links: links that encode the prompt and the service used as context. Cutting and pasting the output of LLMs without the context of how you generated that output is just not helpful, for a number of reasons. One of them is an ouroboros effect that degrades LLM output when LLMs are trained on LLM output instead of human-validated knowledge.

So even if you’re putting LLM output into the discussion in good faith, by doing it in such a way that it’s not annotated as LLM content you are degrading the accuracy of future LLM responses. In fact, the best way I can think of to ensure LLMs get worse is to post bad LLM answers and let people assume I validated them, even if other people say they are wrong. So when I see people doing that, I kind of assume that’s their motivation: to sabotage LLM accuracy, not to actually help people.

And if you don’t also include the prompt you used, you’re doing a disservice to the people you are trying to help, because what matters for LLM use is the prompting. You really have to ask yourself how you are helping someone, who has the same capacity as you to use an LLM to answer questions, by copying and pasting the output without the context of the prompt you used. I frankly don’t understand the point of it. We can all prompt LLMs for answers, but if we all use different prompts we all get different answers, so I don’t think it’s helpful unless you include the prompt and which model you used. Different models also give you different answers for the same prompt.

You’re also offloading an immense amount of labor onto other people, with the expectation that they will validate what you pasted even though you did not. That is a big problem. It’s one thing to make a mistake as a human; it’s another thing to be willfully negligent and expect other people to pick up the slack for that negligence. That’s borderline abusive from a community-standards point of view. It’s one thing to expect review to catch human mistakes; it’s quite another to expect human review from others when you aren’t willing to do that work on the content you’re putting into the discussion. This is the thing that gets people mad across the ecosystem: it is literally orders of magnitude easier to generate output and paste it into human-centric processes that rely on human review, and it is overwhelming the capacity of the humans. It makes people angry, and it is an incredibly unempathetic and damaging way to use this technology.

You should really think about this behavior. Copying and pasting LLM answers, unannotated, into human-centric discussion really isn’t the help you think it is.

2 Likes

Jef, I hear you. With all due respect, you are assuming in your statement that:

  1. anyone pasting the same prompt will always get the same quality of response;
  2. anyone has access to the same level of service or subscription;
  3. anyone has the same context and interaction history (which impacts AI responses dramatically);
  4. and anyone even knows how to use AI services efficiently!

which is obviously not true.

Have you noticed that a good portion of the questions in this forum could have been easily resolved just by using Google’s AI Mode?

While for basic questions I could agree with you, in this case we’re discussing a complex technical issue that involves not only Fedora but also ThinkPads and Intel products, and this is exactly the mix where good AI research excels. It helps me tremendously! The research was posted not only to “help” but also to be discussed: these points were interesting to me personally.

If you don’t see it and don’t use it, I guess you are losing a lot of the potential that new tools give us. In most cases, the info these reports provide is not easily accessible or not easy to find.

Just to clarify: is AI-generated content prohibited in this forum, as long as it is clearly marked?

If Google’s AI Mode could answer the questions, then people wouldn’t be here.

That’s sort of the point: if AI systems are valuable, then there is no need for you to cut and paste unvalidated AI outputs here; people would just, you know, use them and get quality answers.

If you paste AI stuff here you need to validate that it works and take ownership of it.

If we need to spin up a space that is meant entirely for cut-and-paste of AI outputs, we can probably do that, but it won’t be mixed in with the human-centric discussions. It will need to be set apart and made explicit that it’s an AI fpaste sort of thing.

I have no problem with seeing the technology explored by those interested in it. Do it; I’ll support a space so you can do it in a way that doesn’t overwhelm the human-centric processes that rely on human review, and that helps us build and evaluate ethically trained open source models.

But in the meantime, it’s really important that you are explicit about AI-generated output and explicit about which instructions you’ve pasted that you have not validated personally. Super important if you, as a human, want to build human-centric trust.

2 Likes

I understand that some people don’t like AI content.

This sounds like you want to prohibit users from posting useful content if it was AI-assisted, and you did not answer my question about whether AI-generated content is prohibited in the Fedora forums.

So I will repeat it: is it prohibited or not, as long as it is clearly marked?
And who decides whether it is useful or not?

As to “unvalidated” AI outputs… Are all user posts in forums 100% validated?
Of course one can demand that users take accountability for everything they write in public forums, for free, during their spare time (even doing fact-checking as if they were writing a New York Times column), but that’s hardly realistic.

1 Like

Every instruction that I post is either something I have personally run on a system, or something I have caveated with “I haven’t run this on my system.” That is my personal standard for myself.
By committing to that standard, I build trust.

The moment I post any instruction that I haven’t personally used but did not caveat, I weaken that trust with other people.

Right now, as a community, we don’t have a lot of trust around AI. So in order to build trust, we need the humans using it to be accountable and make good choices about their behavior; modeling good behavior is an important part of building trust.

Behaviors like copying and pasting AI instructions without being explicit about whether or not you’ve validated them do not build trust. Going further and implying that you’re just going to rely on other people to validate what you did not validate actively destroys trust.

If you want a space to do that, we can work on figuring out what that would need to look like. It might be a separate category in this forum; it might be an fpaste-like thing; there are options. But the copying and pasting of unvalidated instructions here needs to stop, and the copying and pasting of AI-generated output that isn’t explicitly marked as such also needs to stop. Having an explicitly designated space for this behavior, one that serves some useful purpose, solves the problematic behavior here in this human-centric discussion area.

3 Likes

I would prefer this discussion NOT clog up this thread… But to hopefully add something productive and answer @andym’s question as to whether or not I found it useful: the answer is sort of. Powertop was already covered here, I had previously verified that tuned was installed and running on my system, and I have specific reasons for continuing to use KDE’s built-in power profiles.

The AI mentioned two things that weren’t previously discussed: checking the frequency scaling driver, and setting the ASPM status. However, neither yielded any improvement, as my frequency scaling driver is already intel_pstate, and the suggestion to enable ASPM powersave via a GRUB boot parameter didn’t appear to make any difference.
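For anyone else wanting to check those same two things, these are the standard sysfs locations on recent kernels (a diagnostic sketch; paths assumed, and the bracketed entry in the ASPM file is the active policy):

```shell
# Which frequency scaling driver is in use (expect intel_pstate on this hardware)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# Current PCIe ASPM policy; the entry shown in [brackets] is active
cat /sys/module/pcie_aspm/parameters/policy
```

If the policy file shows [default], I believe the usual spelling of the boot parameter is pcie_aspm.policy=powersave, appended to GRUB_CMDLINE_LINUX in /etc/default/grub and followed by regenerating the GRUB config, though as noted above it made no measurable difference here.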

Personally, I’d rather discuss directly with a human expert. My experience is that AI is wrong the majority of the time, offers things I already know the rest of the time, and only very seldom actually provides a solution. I’d rather not burn my planet and trash the job market for a tech offering more problems than it solves.

If anyone has further input on getting the WiFi and ASPM powersave modes working, I would love to try those.

6 Likes

I apologize.
You’re right this should be a separate thread.

4 Likes

I did confirm with lspci -vvv | grep LinkCtl that all devices reported have ASPM already enabled.
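For anyone repeating that check, a slightly wider filter also surfaces what each device supports (the LnkCap lines), not only what is currently enabled (the LnkCtl lines). A minimal sketch, assuming pciutils is installed and running as root so all fields are visible:

```shell
# LnkCap lines list supported ASPM states; LnkCtl lines show what is enabled
sudo lspci -vv | grep -i aspm
```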

1 Like

Because I enjoy the same freedoms you do, to join a conversation if I feel I can contribute.

And this isn’t about my opinions or beliefs, it is about accountability. Which is why I chose to post in here, even though I knew that it was off-topic. In my experience, accountability works best if it is close (in time and location) to the behavior. (I did consider starting a new thread, but then it would just have been another “old man yelling at clouds” and you probably would have ignored it. This way, at least we had a conversation about it.)

Edit: I do apologize to @darthziplock for somewhat derailing the thread. I would like to make it up to you by offering some support, but I haven’t had an Intel-based laptop in quite some time.

Again, I did that. And I do that regularly.

You are putting words in my mouth. AI has its uses, but it also has its limitations. You may be aware of them or not, but the issue I have is ultimately a question of ownership and accountability. You pasted the output, but you didn’t own it in the sense that you take responsibility for its contents. I even asked you if you would vouch for its correctness and effectiveness (highlight added because it will be relevant in a moment), to which you responded: “Absolutely. With some practice you quickly see whether the response is helpful or not.”

Basically, the AI output that you vouched for was not effective in addressing the issue. And another user spent time and effort to figure that out.

That is at the core of why I started this off-topic discussion. LLMs are great tools, but they have their limitations. They are stringing words (or tokens) together, based on probabilities. In some situations, this is helpful, and in some not so much. But just dumping it here, with the expectation that others will sort it out, that is what I call rude.

I make mistakes, I am human after all. And I am glad if others point them out, this way I can learn from them. But if an AI hallucinates something and another user just throws it in a post here and calls it a day, there is not much value in it for anyone. Even if somebody takes the time to read the AI output, spots the hallucination, and points it out, the person who posted it will probably just shrug it off and move on. It’s the AI’s fault after all, not theirs, and they are most likely not learning anything from it, because it wasn’t their knowledge in the first place.

2 Likes

Please everyone, let’s keep focused on the topic and how to solve a technical problem for the author of the topic :classic_smiley:

4 Likes

I’ve done a little further optimization by forcing syncthingtray (and thus syncthing) to run only on my efficiency cores via taskset, so hopefully that saves a bit of power. Might optimize a few other apps that way, like Joplin or Slack or anything else that does background stuff while running.
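For anyone wanting to try the same thing: which logical CPU numbers are the efficiency cores varies by model, but on Intel hybrid parts the E-cores typically report a lower cpuinfo_max_freq, so you can identify them first. The commands shown in comments use a hypothetical core range and app name; adjust both for your machine:

```shell
# List each logical CPU's max frequency; the lower-frequency group are the E-cores
# (the || true keeps this from failing on systems without cpufreq sysfs entries)
grep . /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq || true

# Launch a program pinned to (for example) logical CPUs 12-19:
#   taskset -c 12-19 syncthingtray
# Demonstrated here with a harmless placeholder pinned to CPU 0:
taskset -c 0 sleep 0.1

# Re-pin an already-running process by PID:
#   taskset -cp 12-19 "$(pgrep -o syncthing)"
```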

It now idles down around 2.9w with a bunch of browser tabs and crap open, and it lasts an extra 1-2 hours in regular use.

It might be as good as it’s gonna get for now. Hopefully there will be further improvements in future kernel updates, given how new this hardware is.

1 Like

Power consumption will differ greatly depending on the use-case. Particularly for models marketed to large enterprises with cubicle farms and offsite laptop users, vendors need to have competitive power usage, and will spec parts and tweak drivers for customer use-cases, including the linux distro. It is good to see the progress you have made, but my experience has been that software upgrades emphasize features over power efficiency. It would be interesting to see a direct comparison for your use-case with a distro that seems to offer better power profiles. You could do that by dual booting.

1 Like

I do have a spare NVMe drive I’ve been thinking about putting Debian, Ubuntu, or Zorin on for testing, but those are still on way-old kernels that don’t have the driver support for this machine yet. It’s gotta be at least 6.17.10, ideally 6.18.

So I’ve encountered a strange issue with powertop: it slows my system down.

If I run it via terminal and watch outputs for a minute, my system gets laggy. Games (particularly emulators) run like crap.

Having the powertop background service enabled for auto-tuning also causes the system to get laggy after a few hours.

Can reproduce 100%, and a reboot clears it and restores performance every time.
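One way to isolate whether the background service or the tunables themselves cause the lag (a sketch assuming the service name Fedora ships, powertop.service, which runs the auto-tune at boot):

```shell
# Stop and disable the background service so tunables are only applied manually
sudo systemctl disable --now powertop.service

# After a reboot, apply the tunables once by hand and watch whether the lag returns
sudo powertop --auto-tune
```

If the lag only appears after the manual --auto-tune run, one of the individual tunables is the culprit rather than powertop’s monitoring itself.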

Your original story lacks any evidence:
others claim …
similar hardware …
i read …
When you make statements like that, you have to show proof. Where is it?
Are you capable of saying another computer has similar hardware? And even when that is so, what about the differences (similar is not the same)? What about the BIOS, is that the same? What about the differences in environment: were both your laptop and “the other” placed in the same conditions in a test lab?
One thing I’ve learned in the last 25 years about “people on the internet” is that you can trust nobody.
You say your laptop works for around 10 hours. I say be happy with that; mine barely reaches 3 hours.

2 Likes

You know, your reply really reeks of self-righteousness and ego.

You want “evidence”? Did you not see the part where my own laptop reports far lower power drain with different distros? Is that not “evidence” enough for you? Why don’t you go read the thread and see that I was, in fact, able to improve my laptop’s power consumption by implementing solutions from forum members who were actually helpful?

Instead, you’re trying to gaslight me like we’re in a courtroom and millions of dollars are on the line if you don’t win your case.

Maybe consider that replies like yours are the exact reason the “year of the Linux desktop” hasn’t happened. A noob asks for help and gets treated like that? Yeah, they’re not going to join the Linux movement.

Next time, at least read the entire discussion and see how things have played out before jumping straight to the comments.