It drives me nuts when people say, "It's so easy to tell when something is written by an AI." What they really mean is that when they read something and think it was written by an AI, it was. Meaning they have few false positives. They are typically blind to the false negatives.
An intern for Matt Yglesias asked her professors if she could turn in half her papers written by AI, have them graded, and at the end of the semester reveal which were which. Several agreed. She turned in 100% AI papers. She got a B- average for the semester at Harvard.
It turns out that if you mix up the AI/human labels and have people read stories, they are somewhat less engaged by AI content but find it just as persuasive. So for fiction, AI isn't there yet. But as soon as you tell people that something was written by AI, they don't like it, even if it was written by a human.
While people generally rated AI stories as just as persuasive as their human-authored counterparts, the computer-written stories were not as good at transporting people into the world of the narrative.
“AI does not write like a master writer. That’s probably good news for people like Hollywood screenwriters—for now,” Chu says.
https://www.futurity.org/artificial-intelligence-writing-stories-3255142/

In other AI news, Seth Godin just released an audiobook. He read it himself because he felt the AI was not *quite* there yet with intonation and emphasis. He thinks that within a year, audiobooks will only be read by humans when particularly talented readers take on a book.
https://seths.blog/2024/10/thoughts-on-audiobooks/