We find two positive effects on discrimination accuracy: gender, specifically “non-binary/third gender” (b = 0.169080, SE = 0.030607, z = 5.524, p < 0.0001), and having seen any of the poems before (b = 0.060356, SE = 0.016726, z = 3.608, p = 0.000309). These effects are very small: having seen poems before increases the odds of a correct answer by only 6% (OR = 1.062). These findings suggest that experience with poetry did not improve discrimination performance unless that experience allowed participants to recognize the specific poems used in the study. In summary, Study 1 showed that human-out-of-the-loop AI-generated poetry is judged to be human-written more often than poetry written by actual human poets, and that experience with poetry does not improve discrimination performance.
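As a quick sanity check on the reported effect sizes, the odds ratio for a logistic-regression coefficient is simply exp(b). A minimal sketch (using only the coefficients quoted above):

```python
import math

# Logistic-regression coefficients reported in the text
b_gender = 0.169080  # "non-binary/third gender"
b_seen = 0.060356    # had seen any of the poems before

# Odds ratio = exp(coefficient); OR - 1 gives the % change in odds
or_gender = math.exp(b_gender)
or_seen = math.exp(b_seen)

print(f"OR (gender):      {or_gender:.3f}")  # ~1.184
print(f"OR (seen before): {or_seen:.3f}")    # ~1.062, i.e. ~6% higher odds
```

This confirms that the "seen before" coefficient of 0.060356 corresponds to the OR of 1.062 quoted in the text, an increase of about 6% in the odds of a correct answer.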