Not to pick on @boredsquirrel, but this brings up something I’ve been thinking about. I noticed that the Home Assistant community forum recently banned answers from ChatGPT (read the post about it, and many replies, here). The problem is that it is really good at producing answers which look good and sound authoritative, but which are not validated. Fundamentally, the algorithm doesn’t know or care. (This is very similar to the way AI-generated art tends to have trouble “understanding” the number of fingers a human being should have. That’s a big leap for an AI model without some external guidance.)
I don’t think there’s a problem with playing with it, and seeing if things it generates might work, but:
I am concerned about the vast potential for more-sneaky spam (which a ban won’t really solve), and
The process of helping folks with the many obscure problems they find already requires a lot of patience and guesswork.[1] I don’t think we should add another layer to that.
And, again, not to pick on the poster who I’m responding to (I think they’re acting in perfectly good faith), but… I don’t think a lot of “chatgpt did this… does it work?” posts will add much value.
So, I propose:
If you post something that was generated by ChatGPT or similar tech:
Test it first and make sure it works, before posting.[2]
Clearly label it as such.
Try to add some human explanation to anything “magic” — like, the comment # offset in the linked post isn’t really very helpful about what that’s there for.
Searching for parts from the generated text may lead you to human-created source material which addresses the same problem. Link to that, if you can!
If you see an unlabeled post that looks suspiciously AI-ish (especially if it’s a new poster, or a big change in tone), flag it for moderator review.
If it doesn’t work but you think it might be useful with a little tweak you just can’t figure out, explain what you expect and what happens instead, just like any good bug report. ↩︎
… how do we know this proposal wasn’t written by ChatGPT? It sounds awfully friendly towards ChatGPT and friends.
On a more serious note, I think many of those points would apply regardless of how the content of the post was derived. I may just lack imagination, but other than perhaps badges and/or community “cred” here, I’m not sure how much draw there would be for someone to manually read posts, feed them to ChatGPT, and then post the results blindly.
–
This post may contain human originated content lacking ML polish and has not passed any automated verifications. Viewer discretion is advised.
I see no reason why we wouldn’t want AI generated output to be clearly identified as such.
There seems to be an uptick in AI generated spam in most forums in recent weeks. Having a rule that requires it to be identified could help isolate some of that.
It is a lot easier than searching the web for links (which appear on the surface to be relevant) and posting them blindly, which happens quite a bit on Ask.
It is usually well intentioned, but often people try to help even though they don’t have the experience to differentiate between relevant information and information which isn’t.
I generally like the “provide the source” approach Stack Overflow has for its answers, and it should be applied here. If you intend to work with a ChatGPT-generated answer, you MUST cross-check it with the official project documentation, man pages,… whatever real source you can find.
We should be careful about the “suspiciously AI-ish” things, as I would expect that many non-native speakers may sound weird when using translator engines (though to be honest, modern AI usually uses much better grammar than non-native speakers like me do). But we can resolve this on a case-by-case basis.
We could maybe add a similar note for docs, Magazine, and Community Blog articles. I believe that while one can use ChatGPT-like solutions to help with writing, doc reviewers and editors should not be used as the first-level gate for such content. One must do the proofreading and testing of the content before sending it for review.
But I am not sure if this is needed as a rule now. I believe we don’t have this problem yet. Maybe let common sense guide this one
As of right now, the “markers” I see in ChatGPT text are very different from what a non-English speaker might write, even translated, because the translation engines try to stay close to the original. It’s more… kind of weird repetitions and re-phrasings, and weirdly confident statements.
In general, in my experience, it wants there to be an answer. It writes more consistently than a human, which means it’s kind of bland, and also, since it has no idea what is true, it will make up facts that fit that consistency: not logical consistency, but consistency of text flow. For example, if there’s a function call that converts something from one format to another, it might assume that there is also a function to reverse the process, and possibly similar functions for other formats that would be nice to have. (This will probably improve in the future, but that’s a lot of what they’re dealing with on the HA forum.)
My thought process is that a response from ChatGPT that is wrong would not be so different from a human response that is wrong. In both cases someone will likely chime in with an alternative solution and perhaps a rebuttal to the “offending” post.
I think these guidelines are good for those who want to use ChatGPT to answer questions as part of managing the social contract. Basically we’re communicating that we would prefer humans who know what they’re talking about to answer with sources or research rather than generated text. That on its own could prompt the well-intentioned folks to either not do it or to do it right. The not-so-well-intentioned may just pretend they are the AI, in which case see my first point.
I’ll also take this time to say that AI makes me sad, lol.
Honestly speaking - that’s exactly how I write code.
I assume that since this operation is logical, there should be a library function for it. But then I go to Stack Overflow to find the exact name for it and read the relevant info.
And that’s the reason why I like Python - this approach almost always works.
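To give a minimal sketch of what I mean (the specific function pairs below are just illustrative, not anything from the posts above, and this assumes nothing beyond the Python standard library):

```python
# Guessing that a "logical" inverse function exists usually pays off in Python:
# the standard library really does ship both directions of these conversions.
import base64
import json

data = {"release": "Fedora Silverblue", "version": 37}

text = json.dumps(data)             # dict -> JSON string
back = json.loads(text)             # JSON string -> dict: the inverse exists

blob = base64.b64encode(b"hello")   # bytes -> base64 bytes
plain = base64.b64decode(blob)      # and the reverse is there too

assert back == data and plain == b"hello"
```

The guess still needs checking (that’s the Stack Overflow step), but more often than not the symmetric function is really there.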
Not just ChatGPT: many other AI chat bots like YouChat or NeevaAI will appear this year, so some regulation is needed, or at least moderation may be good to have.
I would like to share a doubt.
What is the difference between text copied and pasted from the first website found on a search engine, and the output of ChatGPT?
How many sites contain inaccurate information (written by humans), deprecated solutions, or solutions that don’t work?
On forums, chats, and mailing lists, it is not so unusual to find answers (written by humans) not verified by the poster, not pertinent to the question, wrong, and so on.
If it is not an AI directly answering here, why should copied-and-pasted solutions generated by one be treated in a particular way?
ChatGPT sucks. I just asked it to tell me where I can find the Fedora Silverblue 37 image creation source code, and it gave me a lengthy explanation with an invalid URL to nowhere LOL
It’s garbage for this kind of use case.
However, if you ask ChatGPT to write you a poem about trucks, Michael Jackson, or flowers, then it can provide something interesting and relevant.
Yeah, but just try to get it to write a sestina or some other specific poetic form! It will fail, and then accurately describe what it should have done but didn’t.
In the case of my post, I asked the AI for a solution and many of the ideas were really good. It can help a newbie to get at least some kind of idea. But in the end it didn’t work, so that’s why I was just asking “I found these commands, edited them a bit, can you find the errors and help?”
There is no documentation about hibernation on Silverblue, let alone an out-of-the-box implementation.
ChatGPT can be an effective tool for enhancing the written language of non-native speakers of English, as I have experienced firsthand in editing the README files for several of my computer programs. However, while it has been useful in this regard, I have found that it is not well-suited for programming, as the code generated by the tool often fails to compile. Additionally, when I posed relatively simple questions related to my field of study (paleobiology) to ChatGPT, I was disappointed to receive incorrect answers. While I am currently on the waitlist for the “new Bing,” I am concerned that AI tools such as Microsoft’s new Bing and Google’s Bard may pose a threat to our civilization, given their inability to accurately answer even basic questions.
In my experience, the responses from ChatGPT have consistently fallen short of accuracy, suggesting that its utilization should be restricted on online forums. At its core, it operates solely as a text editing tool. However, while I acknowledge that ChatGPT’s command of the English language exceeds my own, I also find its language output to be lacking in naturalness and genuine expression. As I read these words, I sense a disconnection from my own identity.
Again. I express my doubts.
Do you imply that human-generated answers are always good and functional? If you ask me to generate source code… it will not compile for sure!
If you look at forums, you know, you could find a bunch of wrong (human-generated) answers that don’t work or that are off topic.
Many times I contacted support lines (telephone provider, hardware vendor, etc.) and the (human) helpdesk operator always replied with predefined answers; they did not understand my issue, they provided wrong directions, they pointed me to unrelated documentation. But I know: it is first-level support, and I can’t expect them to have in-depth knowledge of all the topics.
There was a time when people said: “I read it in [whatever reputable] newspaper”. Or a time when people said: “I heard that on TV”. Or even: “I found it on the Internet”. All that implying that what they read/heard/found was absolutely true. Well, no. Not necessarily. You should always double-check answers and information, and you should check the sources.
Same thing here: is it an AI-generated answer? It is not necessarily correct. (But human-generated answers aren’t always correct or perfect either.)
Yep. But was it the AI that autonomously created an account here and automatically posted the replies?
What is the difference between people publishing AI-generated answers (without noting it), people publishing random stuff generated by their own brains, and people publishing information grabbed from the first website they found? I think that in the first two cases whoever published such stuff could be treated as a joker (or a troll), and in the third case as simply careless.
Why should AI-generated stuff be treated as a special case? (Just for the sake of discussion.)
Right. Why allow (or encourage?) more certain noise on top of that?
No — it was a human with some ill intent.
Sure — if a person constantly trolls or generates (from their own human mind) unhelpful nonsense, we would intervene.
AI-generated content is something we can specifically identify as a problem. Copying and pasting from Wikipedia without attribution is another form of the same thing, common on Q&A forums, but because of the specific nature of most of our questions it’s not a big problem here. (It was on Photography Stack Exchange when I was a moderator there, for example.)
Sure. But my point is the motivation. Is it just because it is AI-generated [1], or actually (like your Wikipedia example) because it is something (like any other source) copied and pasted without attribution?
I’ve read many concerned people describing such stuff as evil, like something that could replace their work; well, yeah, if your writing is poorly curated, an AI could do it better. Like: an AI could not compete with Giotto, but it could make better drawings than me. ↩︎
I’m not worried about it replacing people. The problem is that it can “write” in a way that sounds authoritative and “expert”, but is full of optimistic nonsense. It’s good at appearance, not fact. And it is specifically good at that in a way which might be hard for intermediate users to validate.
Regardless of any benefits, it is specifically useful to spammers, trolls, and troublemakers. Requiring validation and labeling leaves room for possible positive use, while hopefully curbing that abuse early.
I’ve seen Usenet destroyed by spam. Email is going that way too, even if not everyone realizes it. Google search has been ruined by content farms. We had to put up barriers to Fedora Wiki edits. Let’s get ahead of this here.