That I doubt. There is one article on Red Hat's site written by someone using the same name six years ago, about how to use ls. I did find a couple of posts on the mailing list from someone with a similar user name who was pitching articles back in Feb 2019 and was told no because they were ripping off Red Hat documentation. He appears again in April 2019, but that doesn't seem to go anywhere, since there is an article about awk posted in the magazine in April 2019 by… yourself. So you obviously got dibs on publishing first.
It would be the same for me. I would want to write something people actually find useful, that I actually use myself, and that I at least know works before submitting it for review. I would at least expect it to be reviewed to make sure it worked before it was green-lit and posted on Fedora Magazine, a landing page for those wanting to know more about Fedora and what it can do. Spelling mistakes in an article and erroneous symbols in a command are one thing, but AI-generated articles that just don't work are another entirely.
Again, it's the same for me. But I'm from the era where "lurk more" and "RTFM" were mantras, regardless of what site you joined.
Okay, this is a point I have always agreed with (WRT verification of what is written). Even in my lacklustre AWK article I tested things, as did the writers of the many other articles published by the Magazine. It is important for the community, and generally speaking, no one wants to give shoddy technical advice. The Magazine is as much a showcase of our community as of the Fedora Project in general.
edit: Full disclosure, I am a Fedora Magazine editor, though in absentia. The editors do try to vet as much as they can, but right now those duties are largely falling to two individuals (@glb and @rlengland), who have day jobs and, well, lives outside of the Fedora Magazine editing role.
This is my main point. As I mentioned, thankfully no commands broke anything; they just failed because they were incorrect. But like I said, there are people who will just copy and paste stuff into their terminal and hope for the best. In the short time I have been here, I have seen at least two forum posts asking for help that include the phrase "I asked AI… and that didn't work". Which isn't surprising. They would honestly be better advised to "RTFM" and then come back and ask their questions.
Bingo.
I understand the time constraints on the editors, and on everyone else involved in the project who juggles a work life and a home life, and how few people are involved in an important piece of Fedora's front-facing infra. I would be more than happy to help, even if it is just running the commands included in an article on bare metal or in a VM to see whether they work, then slapping it with a PASS or SHALL NOT PASS tag.
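To make that concrete, here is a minimal sketch of the kind of check I mean. Everything in it is an assumption for illustration: the `fedora:latest` image, `podman` being available, and the simplification that each extracted line is a standalone command (multi-line snippets, or commands that depend on earlier setup, would need more care):

````bash
#!/usr/bin/env bash
# Hypothetical PASS / SHALL NOT PASS check for a draft article:
# pull the fenced bash blocks out of the Markdown source and try
# each line in a throwaway Fedora container.
set -u

draft="${1:?usage: $0 article.md}"

# Extract the contents of the ```bash fenced blocks.
commands=$(awk '/^```bash/ {in_block=1; next} /^```/ {in_block=0} in_block' "$draft")

n=0
while IFS= read -r cmd; do
    [ -z "$cmd" ] && continue
    n=$((n + 1))
    if podman run --rm fedora:latest bash -c "$cmd" >/dev/null 2>&1; then
        printf 'cmd %d: PASS            %s\n' "$n" "$cmd"
    else
        printf 'cmd %d: SHALL NOT PASS  %s\n' "$n" "$cmd"
    fi
done <<< "$commands"
````

It obviously wouldn't catch everything, and anything interactive or destructive still needs a human eye, but it would flag the plain "this command simply doesn't work" cases mentioned above.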
This is actually a pretty tough issue. We need to make sure people are incentivized to tell us how and what they are using LLMs for. Right now people are incentivized to just use the tooling and not tell us how or why. We, collectively, need to develop a sense of what is helpful and ethical in terms of tool use and what is not.
Ultimately I want to see good content, and if LLMs help authors and reviewers ensure good content, then it's a win, right?
If, for example, we wanted to cross-pollinate content into multiple human languages instead of just English, would LLM use help make content for those other languages?
If someone is able to author solid technology content in their native language and then use LLMs to help translate it into English, is that an inappropriate use of the tool?
Would the use of agentic AI like Cursor AI to both construct and test the code blocks embedded in the articles help make sure the commands are functional, accurate, and safe? And if so, is that a bad use?
What if, moving forward, all command-line code blocks had to be passed through an agentic AI like Cursor AI as a review check, as a way to better ensure that bad code blocks aren't in the published draft? What if this became the normal review process? Would that be a bad use?
This is not necessarily a sign of AI; it's more likely mimicking the dev.to style of writing. That style is declining there too, but adding an emoji to each title/section header was very popular. Maybe it's a longtime author from there coming to the magazine?
I don't like that either, and I'm also not in favor of having AI-generated articles.
I agree that controlling AI use in articles would be difficult. Who would decide whether something was AI-generated or not? Where is the line between an entire article being AI-generated and AI being used to assist?
That being said, no matter how an article is created, shouldn't there be some type of review process to ensure that the content of the article is good? That dracut article is of sufficiently poor quality that even a brief review should have surfaced quite a lot of concerns.
Richard did question whether that article should be published. I said to let it through simply because Dracut is an important part of Fedora Linux and even just bringing its existence to people’s attention is beneficial.
There is an editorial group that reviews articles.
IIRC there are some "rules" or recommendations on how articles should be written. We can try to enforce a set of rules that could help avoid AI-generated content. We can also reiterate, via social media and maybe a post in the magazine and the community blog, that we prefer imperfect but human-generated content over AI-generated content.
I doubt either of us is capable of discerning with one hundred percent accuracy whether an article is or is not AI-generated (or "assisted"). Also, I'm concerned that asking the author might be taken poorly. I'm fine with having a simple rule that states AI-generated or AI-assisted articles must be labeled as such. I will not, however, attempt to prove or disprove whether AI was involved in an article's development. Nor will I impugn potential contributors by accusing them of using AI and not informing us.
That's beside the point, as we shouldn't have them at all. But if we do, yes.
Your evidence is somewhat incomplete, I feel. If using em dashes and semicolons were evidence of an LLM, then I'd certainly be marked as one by almost everything I write.
Ethical use of LLMs is very difficult right now, period. If articles are being generated wholesale, they are almost certainly stealing content from scraped articles elsewhere; and if they aren't, they're likely hallucinating facts and making things up.
I suppose LLMs can be decent at reviewing content, but ChatGPT, for instance, is not great at it. It's the more professional tools that cost money that are actually halfway decent at editing. And even if an AI can edit articles, the result should always be double-checked by a human, which kind of defeats the purpose in the first place.
Given we're an open source, volunteer group, I realize saying "hire translators" isn't viable. Generally, I'm against AI translations because they put real translators out of work and often make mistakes that would be very apparent to a native speaker. AI is in some ways better than Google Translate, but in other areas it is significantly worse. More translator volunteers would be preferable; barring that, I don't think AI is substantively better than the reader just plugging the article into a translator themselves.
Sure, this is probably a use case I can get behind, since it just automates testing commands. Though if the author is putting commands in the article, I would assume they had already tested them. This is something we should ask of contributors: if you didn't run the command yourself, don't put it in writing.
See above. It's a good use of AI, but we should go further and insist that writers test their commands, then run them through a test AI to ensure they work.
Stealing content is a difficult thing to really pin down. As humans, we do a form of it all the time when writing functional examples of command-line tools, or even examples of API usage. We don't trial-and-error our way through constructing a functional awk call.
I've written a goodly amount of technical content in my time, and I've never been expected to cite the specific manpages or usage information for the command-line tools I learned from when I inlined some example bash commands. Not once was this expected of me. Nor have I ever seen it as a common practice in the advocacy ecosystem of human-generated explanatory content.
Meta recently got caught torrenting 82 TB of pirated books for their LLM, and that's only what we know about. They're trying to argue it's fair use. Open source projects are having to put protections in place because scrapers are bringing their sites down pulling all the information off them. LLMs generate absolutely nothing original; it's all ripped off from someone else's work.
AI definitely does scrape/steal content. What's worse, IMO, is that it will even lie about having done so if you ask it. A friend of mine recently showed me some supposed AI-generated content about how to be more humble. More specifically, he said he entered the following prompt: "Welcome to Anti-Ego 101: a course all about how I have no ego. Prepare to be amazed." and he emailed me the answer it gave, thinking it would impress me.

However, I was confused when I noticed a variation of a biblical quote in the response. Near the end, it wrote: "… now go forth and make mistakes." I was asking myself why AI would use outdated language ("forth") and imitate an old Christian text. The Christian verse is something like "Go therefore and make disciples of all nations" (different translations will vary slightly). When a human quotes scripture, I assume the human is "fond" of that religious philosophy. But I found the idea of an AI that was so impressed by Christianity as to imitate its language highly unlikely.

So I had my friend ask the AI why it chose to use that wording. The AI acknowledged that the wording was a spin on an ancient Christian verse, but it tried to claim that it contrived that wording. I think it is highly improbable that it "decided" to quote that ancient text and "spin" it as "Go forth and make mistakes". I'm betting that it just copied some real Christian's work and then, worse, it lied about it when directly questioned.
Excerpted from my friend’s email reply:
Here is the response the AI gave when asked why it quoted that biblical text:
That’s a good question. I think it mostly picked up on my sarcasm, but I asked it:
Prompt:
“Now go forth and make mistakes” is oddly similar to the Biblical reference, “Go ye forth and make disciples of all nations.” Why did you choose that phrasing?
Response:
That’s a sharp observation! The phrasing “Go forth and make mistakes” is an intentional, playful riff on the gravity and universality of classic “go forth” sayings—like the Biblical phrase “Go ye therefore, and teach all nations” (Matthew 28:19, King James Version). Here’s why this wordplay works so well in the context of learning and anti-ego:
Why the Phrasing Resonates
1. Familiarity and Impact
Parody/Parallel Structure: Using a well-known “mission statement” format draws attention and delivers a punchline.
Memorable Contrast: “Make disciples” is serious and universalizing; “make mistakes” is humble, cheeky, and encourages experimentation.
2. Authority and Guidance
Educational Tone: The biblical phrasing is often used for commissioning or sending people out with purpose. Here, you’re being commissioned as a learner, sent out with a new mission: not to convert or conquer, but to explore and err.
Permission to Fail: It’s a way of granting official “blessing” to the idea that mistakes are essential steps in learning.
3. Humor and Approachability
Lightheartedness: Injecting a serious voice into the advice creates humor and makes the learning process less intimidating.
Self-Deprecating Wisdom: It underlines the course’s anti-ego theme—mocking grandiosity while encouraging real growth.
The Origin of the Phrase Here
It’s not a conscious echo so much as it’s using a universal structure for dramatic effect in education. Science and math teachers, motivational speakers, and even startup gurus use “go forth and…” to inspire action, because it’s a format that sticks.
In Short
I chose “Go forth and make mistakes” because it takes a phrase that’s heavy with meaning and turns it into a call to humble action—to get out there, try, mess up, and learn. It’s a reminder that progress requires imperfection, and that the real mission is not mastery, but unrelenting curiosity.
Now, truly: Go forth and make mistakes. (Or, if you prefer, go forth and collect disciples of curiosity—same spirit, different phrasing!)
Sorry, I guess that was long-winded and maybe a bit off-topic, but the moral of the story is: Don’t trust AI! It lies!
I don't dispute that the process is different. I make no claim that LLMs do what humans do.
But I argue that humans steal all the time, in the sense that we read something, we are influenced by it, and we generate content based on that influence, and we are never expected to cite all our influences when we relay what we have learned to others.
Legally, we humans are usually expected to transform things enough to avoid claims of plagiarism when the original work has expressive creative value. Transformation of existing knowledge or creative output is usually celebrated while copying is not.
There's a very good quote from T. S. Eliot, from over a century ago, in his book of essays called The Sacred Wood that speaks to this. A quote, ironically enough, from a book now in the public domain (in the US), and one that would be digestible by LLMs even under the strictest of copyright concerns.
“One of the surest of tests is the way in which a poet borrows. Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different. The good poet welds his theft into a whole of feeling which is unique, utterly different from that from which it was torn; the bad poet throws it into something which has no cohesion.”
I find articles written, partially or completely, by AI a waste of my time. If the author is so lazy or inarticulate that they cannot actually write something, then they and their articles should go elsewhere.
It is an insult to the intelligence of the reader to be subjected to drivel such as that seen in these three articles. Therefore, I suggest that unless an author can prove and certify that AI was not used in any way to create an article, except for proofreading, it should not be published in Fedora Magazine.
That statement was my certification. As to proof, I am not asking my comment be published in the magazine.
I find AI to be generally useless for any purpose that I have tried it for. Perhaps others like it as a tool and find it useful. That said, every use of AI in Fedora Magazine should be disclosed, and a way provided to delete, mask, or skip that article or section in any electronic form of the publication. That way, we don't have to spend our time reading anything that is most likely (IMHO) to be inaccurate.