I just saw a Slashdot article about how AI is creating more work for those “on the receiving end,” and remembered that this had happened to me recently here. I didn’t waste much of my time reading the whole regurgitation because I was able to recognize it pretty quickly. However, such content might not always be so easy to recognize.
I feel that it is wrong to burden people with reading AI-generated content that has such low information value and is ultimately pointless, in the sense that the “author” isn’t even trying to persuade anyone of anything. (If they were genuinely interested in making a case for or against something, they would use their own arguments; they would not trust AI to do the persuading.)
Should we flag these AI-generated posts so they don’t eat up people’s time? Or, as a compromise, mark them with a special emoji so readers know beforehand that they aren’t likely to get anything out of the block of text?