What shall we do about answers generated by ChatGPT or other AI systems?

An example: one of the just-removed posts replied to a problem with some library with confident instructions about DLLs, which are Windows libraries. A new user might think that reasonable (why not?).

If a human did that in apparent good faith, we could take a tangent to teach them that Linux libraries are a little different. But if we try to take that same approach here, that’s the bad guys winning.

In all of this, there is a fundamental problem called “bulls— asymmetry”[1]: it is far easier to create b.s. than to refute it. Sophisticated generative text tools can tip this balance even further.


  1. or Brandolini’s Law, if you like ↩︎

3 Likes

Written by a human:

I don’t want to go off-topic here, but one thing I don’t fully understand about ChatGPT is why it is called “AI”. If we create an intelligent machine or software, that would be AI, right? But ChatGPT is merely a text completion tool; there’s no artificial being behind it understanding anything. As it’s not an intelligent artificial being (software or machine), it’s not an AI.

I looked up the word “answer” (used in the title of this thread) in several dictionaries of the English language, and most definitions I encountered appear to involve either (a) replying after some kind of mental processing of a question (which requires understanding the problem) or else (b) correctness of the output, a solution to a problem (which is not the case either). Thus, it appears that ChatGPT does not generate “answers”.

Edit:

Of course, “answers” can be wrong, but we can only be wrong when we think, as in definition (a). Because ChatGPT doesn’t think, it is by definition never wrong. The output generated by ChatGPT is a valid text completion, but it does not answer anyone’s questions.
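To illustrate what I mean by “text completion” in the crudest possible sense, here is a toy sketch of next-word prediction. This is my own illustrative code and nothing like ChatGPT’s actual architecture; a real large language model is vastly more sophisticated, but the basic task (predict what text comes next) is the same:

```python
from collections import Counter, defaultdict

# Toy "text completion": count which word follows which in a tiny
# corpus, then always emit the most frequent follower.
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    followers[prev_word][next_word] += 1

def complete(word, steps=3):
    """Greedily extend `word` by the most common next word, `steps` times."""
    out = [word]
    for _ in range(steps):
        if out[-1] not in followers:
            break  # nothing ever followed this word in the corpus
        out.append(followers[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # continues the text plausibly, with no understanding
```

The point of the sketch: the program produces fluent-looking continuations without any notion of truth or meaning, which is exactly the distinction I am drawing.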

2 Likes

Yes, we may be off topic, but I want to reply.

Do I always understand the problem before answering? Honestly, no. Is my output always correct, and does it always solve the problem? No. In those cases, should my answer not be considered an answer? :sweat_smile:

Well, if I know something, it is not because of magic; many times it is because I read or heard something written or said by someone else, or because I learned from somebody else’s experience. Yes, like ChatGPT (more or less).
And sometimes I can be sure of something that is actually wrong.

I’m not an expert, but no, it is not a simple text completion tool. For sure, it is not like the one you find on your smartphone keyboard. I think it is a much more complex tool.

I think that it is a matter of terminology.
Quoting Wikipedia: Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines […] Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs.
[…] AI is used in search engines […], spam filtering and chatbots (such as ChatGPT).

But (quoting Wikipedia again): Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

1 Like

Well, I think that people working in content farms should be worried…

@alciregi, it is a simple text completion tool. I played with ChatGPT. It knows nothing and understands nothing. Take a look at this conversation:

Me:
Who discovered the pre-mammalian hair?

ChatGPT:
The discovery of pre-mammalian hair is not well documented in the scientific literature, as hair is a delicate and rarely preserved feature in the fossil record. However, the discovery of hair-like structures on some dinosaur fossils, such as the filamentous structures on the skin of some dinosaur species, suggests that hair-like structures existed in some dinosaur species.

These structures are not considered true hair, as they are thought to have served a different function, such as insulation or sensory purposes, rather than providing protection from the environment or assisting in mobility like true hair. Nevertheless, the discovery of these structures adds to our understanding of the evolutionary history of hair-like structures and the diversity of structures that existed in the distant past.

Me:
What was the largest terrestrial animal of the Triassic Period?

ChatGPT:
The largest terrestrial animal of the Triassic Period was a dinosaur called Plateosaurus. Plateosaurus was a herbivorous dinosaur that lived about 200 to 145 million years ago and measured up to 30 feet in length. This dinosaur was a sauropodomorph, which was a group of dinosaurs that eventually gave rise to the giant sauropods like Diplodocus and Brontosaurus. Plateosaurus was a highly successful dinosaur that lived in large herds and was widely distributed throughout Europe and parts of Asia during the Late Triassic period. It was a significant part of the Triassic ecosystem and played an important role in the evolution of dinosaurs.

It would cost me some effort to refute all the bullshit ChatGPT generated above, and we’re not on a scientific forum. In brief, the correct answer to the first question should be “You, Master!”, as the pre-mammalian hair (which wasn’t from dinosaurs but from our distant mammal-like relatives living before the dinosaurs) was discovered by me and my co-authors. :wink: But let’s take a closer look at the second question and response. Not only was it not Plateosaurus (nor any dinosaur at all) but the giant therapsid Lisowicia (also studied by me personally, though mostly by some of my colleagues); we also get nice evidence here that ChatGPT doesn’t think: if Plateosaurus is supposed to have lived in the Late Triassic, why does ChatGPT state it lived about 200 to 145 million years ago, i.e. in the Jurassic?

If this neural network was trained on the entire Wikipedia and on papers from scientific journals, I ask myself, how are these nonsense responses possible? ChatGPT is not sometimes wrong; it is always wrong whatever I ask, and it can only accidentally give a correct answer. Asking this simple text completion tool any question is pointless, and its use should be banned on forums.

6 Likes

I think the output you get depends on the input used. Since it is based on generative principles, I would think that merely feeding in scientific papers on the breadth of a topic of interest is not enough. Wouldn’t it be similar to handing those same papers to a random person to read? How many of us would digest the information, and even if we remembered it verbatim, how many would truly understand it?

Gemini has been as dumb as a rock for me, either not producing good results after I detail specifics or being outright wrong. Granted, most of what I ask is Computer Science and administration related, but its answers are typically way off. When it can reference a website, it has produced better results.

I have been asking it about some out-of-print books, and it has produced interesting results, but they’re few and far between. I think these AIs are suited to more trivial things like website lookups, tasks, and current info.

The paid features of ChatGPT struggled for me. I had directed it to some man pages online, and the results were mixed. To me that’s a failure, since its sole focus was the content I provided. If it can’t read man pages and produce good results, it’s years away from being a good tool.

On re-reading, I realized the funny[1] thing happening here. It’s clear to me, as a human with the context of my general knowledge, who discovered the evidence that such a thing existed in the past. However, ChatGPT conflates that with a question like “who discovered fire”, and tells you that there’s no record of which human was around in pre-mammalian times to notice that there were non-mammalian creatures with hair walking around.


  1. not, like, “funny ha-ha” ↩︎

Gemini… If you are under 18, no Unsafe C++ for you!

4 Likes

That’s too funny

1 Like

If the AI post in question is Wifi not working (dual-boot Windows 10 Pro + Fedora 37) - #4 by mattdm then heck no. That’s a wall of time-waste, and I’d ban anyone throwing that or anything remotely like it as a “solution” on a discussion forum (it wasn’t even a solution or any troubleshooting). I felt like I was going through Meta’s bs VR support reading that :stuck_out_tongue:

If I see that widespread here, I’ll find another community. I am quite against others involving time-wasting AI when I’m looking for a solution. I can understand it being kind of a warm welcome, but that should only be shown to that user and nobody else on the thread (unless they opt in to hearing AI text, I guess).

If people want to get answers from AI, they can plug it into whatever the hot AI thing is they want. Why would I want others doing that for me?

Or maybe keep it only in casual forum areas. Not tech support. I’m here to find solutions for issues, and occasionally provide hand-typed solutions to others. AI has no place with that. I’d expect that from Ubuntu or anyone else before Fedora.

That’s all anyone would need to do it, and it wastes the time of anyone else who has to read that wall of bs. If I want AI results, I can get my own, and likewise anyone else can do it too; LMGTFY. Wasting my time with AI gets a nice chew-out, and my questioning of the community that allows it.


I may feed info to AI indirectly, but I’m certainly not open to participating in it. If Fedora introduced AI to the distro, or even hinted at it with a proposal (something like Copilot), I’d distro-hop immediately and eventually start dropping everything I use from RH/IBM. Even toying with the idea of allowing AI on the discussion forum has me concerned. The rest below is just more ranting :stuck_out_tongue:

AI is just like MTX in games: it starts off innocent, and every new item has to outshine the previous one for sales, leading to P2W, loot boxes, and other nonsense. Windows 11 is barely feature-comparable to 10, and they’re focusing on integrating AI. Meanwhile VR is still inconsistent on W11 23H2, just like the initial W11 release and the pre-releases before that. Heck, I’m sure they’ll locally AI-generate Cortana as Bonzi Buddy on the desktop before allowing me to hide the clock in the taskbar again (like W10 and older did). Somehow AI has priority over high-end video/gaming hardware and basic features. Linux can’t even get people on board for Wayland, because that’s inconsistent and feature-incomplete too, regardless of what people have been saying for years. Everyone seems to like Meta’s “move fast and break things” stance, which is silly for operating systems.

AI is unstable, inconsistent, and, as already shown, full of hot air and fluff. Putting that into an operating system doesn’t fly with me (I’m still questioning the unstable OOM killer), and luckily Linux is full of other choices.

Is it possible to add tags to comments? I think one possibility is to ask people to say: “Hey, an AI answered me this; have you tried it, or does it make sense?”

There is actually a feature for this: when replying, you can click the arrow in the top left of the compose dialog and choose “toggle whisper”. Whisper posts are only visible to people in select groups; here, that’s trust level 3 and up, plus mods and admins.

This seems like a good use for that. Of course, as with any such thing, please be careful what you write. The post may be hidden from most people, but it isn’t really private. I think writing “this looks spammy” when something legitimately does look spammy is generally fine, but don’t write anything you’d feel bad about ending up on Reddit or Phoronix or somewhere.

2 Likes