What shall we do about answers generated by ChatGPT or other AI systems?

An example: one of the just-removed posts replied to a problem with some library with confident instructions about DLLs, that is, Windows libraries. A new user might well think that reasonable (why not?).

If a human did that in apparent good faith, we could take a tangent to teach them that Linux libraries are a little different. But if we try to take that same approach here, that’s the bad guys winning.

In all of this, there is a fundamental problem called “bulls— asymmetry”[1]. It is far easier to create b.s. than to refute it. Sophisticated generative text tools can tip this balance even further.


  1. or Brandolini’s Law, if you like ↩︎

3 Likes

Written by a human:

I don’t want to go off-topic here, but one thing I don’t fully understand about ChatGPT is why it is called “AI”. If we create an intelligent machine or piece of software, that will be AI, true? But ChatGPT is merely a text completion tool; there is no artificial being behind it understanding anything. As it is not an intelligent artificial being (software or machine), it is not an AI.

I looked up the word “answer” (used in the title of this thread) in several dictionaries of the English language, and most definitions I encountered appear to involve either (a) replying after some kind of mental processing of a question (which requires understanding of the problem) or (b) correctness of the output, a solution to a problem (neither of which is the case here). Thus, it appears that ChatGPT does not generate “answers”.

Edit:

Of course, “answers” can be wrong, but we can only be wrong when we think, as in definition (a). Because ChatGPT doesn’t think, it is never wrong by definition. The output generated by ChatGPT is a valid text completion, but it does not answer anyone’s questions.

2 Likes

Yes, we may be going off topic, but I want to reply.

Do I always understand the problem before answering? Honestly, no. Is my output always correct, and does it always solve the problem? No. In those cases, would my answer not count as an answer? :sweat_smile:

Well, if I know something, it is not because of magic; many times it is because I read or heard something written or said by someone else, or because I learned from somebody else’s experience. Yes, like ChatGPT (more or less).
And sometimes I can be sure of something that is actually wrong.

I’m not an expert, but no, it is not a simple text completion tool. It is certainly not like the one you find on your smartphone’s keyboard; I think it is a more complex tool.

I think that it is a matter of terminology.
Quoting Wikipedia: Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines […] Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs.
[…] AI is used in search engines […], spam filtering and chatbots (such as Chat GPT).

But (quoting Wikipedia again): Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

1 Like

Well, I think that people working in content farms should be worried…

@alciregi, it is a simple text completion tool. I played with ChatGPT. It knows nothing and understands nothing. Take a look at this conversation:

Me:
Who discovered the pre-mammalian hair?

ChatGPT:
The discovery of pre-mammalian hair is not well documented in the scientific literature, as hair is a delicate and rarely preserved feature in the fossil record. However, the discovery of hair-like structures on some dinosaur fossils, such as the filamentous structures on the skin of some dinosaur species, suggests that hair-like structures existed in some dinosaur species.

These structures are not considered true hair, as they are thought to have served a different function, such as insulation or sensory purposes, rather than providing protection from the environment or assisting in mobility like true hair. Nevertheless, the discovery of these structures adds to our understanding of the evolutionary history of hair-like structures and the diversity of structures that existed in the distant past.

Me:
What was the largest terrestrial animal of the Triassic Period?

ChatGPT:
The largest terrestrial animal of the Triassic Period was a dinosaur called Plateosaurus. Plateosaurus was a herbivorous dinosaur that lived about 200 to 145 million years ago and measured up to 30 feet in length. This dinosaur was a sauropodomorph, which was a group of dinosaurs that eventually gave rise to the giant sauropods like Diplodocus and Brontosaurus. Plateosaurus was a highly successful dinosaur that lived in large herds and was widely distributed throughout Europe and parts of Asia during the Late Triassic period. It was a significant part of the Triassic ecosystem and played an important role in the evolution of dinosaurs.

It would cost me some effort to refute all the bullshit ChatGPT generated above, and we’re not on a scientific forum. In brief, the correct answer to the first question should be “You, Master!”, as the pre-mammalian hair (which wasn’t from dinosaurs but from our distant mammal-like relatives living before the dinosaurs) was discovered by me and my co-authors. :wink: But let’s take a closer look at the second question and response. Not only was it not Plateosaurus (nor any other dinosaur at all) but the giant therapsid Lisowicia (also studied by me personally, though mostly by some of my colleagues), but we also get nice evidence here that ChatGPT doesn’t think: if Plateosaurus is supposed to have lived in the Late Triassic, why does ChatGPT state that it lived about 200 to 145 million years ago, i.e. in the Jurassic?

If this neural network was trained on the whole of Wikipedia and on papers from scientific journals, I ask myself, how are these nonsense responses possible? ChatGPT is not sometimes wrong; it is always wrong whatever I ask, and it can only accidentally give a correct answer. Asking this simple text completion tool any question is pointless, and its use should be banned on forums.

6 Likes