AI is easy to fool. Don't let the people yelling about the fear of AIs fool you. AIs are good at spotting specific patterns. They'd work for a bit, and then people would move on to some other technique. If you tried to counter it with AIs, you'd likely end up with a lot of innocent people getting flagged improperly by the AI. We just need these great community detectives we already have to keep doing the good job they're doing. We need to continue to give them our support.
"AI is easy to fool. Don't let the people yelling about the fear of AIs fool you."
This does not seem relevant to anything I said.
Anyway, I would like the bots to spot patterns and connections in data, not do the flagging. The flagging or even banning should be done by the community, once abuse or scamming has been detected. And I'm of course not suggesting that this would be the only method, but it would help.
It only makes sense: if bots are being used for scamming, bots should be used to counter them.
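To make that concrete, here is a minimal sketch of what I mean by a bot that only spots patterns. All the names and data are made up, and it isn't any existing Steem bot: it just groups accounts that keep sending the same memo and prints a report, and any flagging or banning is left entirely to human reviewers.

```python
from collections import defaultdict

# Hypothetical transfer records; in practice these would come from the
# blockchain, but the data source is out of scope for this sketch.
transfers = [
    {"from": "newacct1", "memo": "win prizes at http://scam.example"},
    {"from": "newacct2", "memo": "win prizes at http://scam.example"},
    {"from": "oldacct9", "memo": "thanks for the coffee"},
    {"from": "newacct3", "memo": "win prizes at http://scam.example"},
]

def group_by_memo(records):
    """Group sender accounts by identical memo text."""
    groups = defaultdict(set)
    for record in records:
        groups[record["memo"]].add(record["from"])
    return groups

def suspicious_clusters(records, min_accounts=3):
    """Return memos reused by several different accounts.

    This only *reports* a pattern; it never flags, mutes, or bans.
    The decision is left to human reviewers.
    """
    return {memo: senders
            for memo, senders in group_by_memo(records).items()
            if len(senders) >= min_accounts}

if __name__ == "__main__":
    for memo, senders in suspicious_clusters(transfers).items():
        print(f"POSSIBLE spam memo ({len(senders)} accounts): {memo!r}")
        print("  accounts:", ", ".join(sorted(senders)))
```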
Bots that did what the Cheetah bot does, saying this MIGHT be plagiarism or this might be something else, would be fine. If they were a tool, then a human could benefit from the tool. That is useful.
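Along those lines, here is a rough sketch of an advisory-only check. The similarity measure and threshold are placeholders, not how Cheetah actually works: the bot only produces a "this MIGHT be..." note with a link to the closest match, and a human decides what, if anything, to do with it.

```python
def word_set(text):
    """Lowercased word set for a crude similarity check."""
    return set(text.lower().split())

def similarity(a, b):
    """Jaccard overlap between two texts (placeholder metric)."""
    wa, wb = word_set(a), word_set(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def advisory_note(new_post, known_posts, threshold=0.6):
    """Return a hedged notice for a human to review, or None.

    The bot never downvotes or flags on its own; it only says the
    post MIGHT be a copy and points to the closest match.
    """
    best_url, best_score = None, 0.0
    for url, text in known_posts.items():
        score = similarity(new_post, text)
        if score > best_score:
            best_url, best_score = url, score
    if best_score >= threshold:
        return (f"This post MIGHT resemble {best_url} "
                f"(overlap {best_score:.0%}). A human should take a look.")
    return None

if __name__ == "__main__":
    known = {
        "https://example.com/original":
            "the quick brown fox jumps over the lazy dog",
    }
    print(advisory_note("The quick brown fox jumps over a lazy dog", known))
```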