The flatness of AI writing
May. 1st, 2026 07:58 am

Ethical use of AI is a huge topic right now. Is there any ethical use of large language models (LLMs)? If so, which uses are ethical? Were all of the current models unethically sourced? Before LLMs, would you have paid a human writer to do this work? But even if none of the above were a problem, when you have AI write for you, you have AI outputs. And I agree with whoever said, "If you didn't bother to write it, I'm not going to bother to read it."
Nonetheless, I have accidentally read quite a lot of AI writing, because it's filled up my FB feed, and it all seems to be junk. Junk in the sense that it doesn't feel mentally healthy to read it. There are quite a lot of different types of AI writing, and the one my friends tend to share most often is the uplifting historical story. These stories are very formulaic, and are both very boring and quite painfully long. And of course, when you look up the real history, they all get at least a few details wrong. But, maybe more crucially, they flatten many different personal stories into one story. Don't get me wrong: I think it's crucial to tell positive stories. But I also think it's crucial to tell factual stories, and to tell diverse stories. Not all histories start in oppression, hinge on the brave actions of a single person, and end with triumph. History, and people, are way more complicated than that. Reality matters, and uplifting stories that are all the same story, told over and over again, flatten us as humans.