Thank you for sharing your opinion @eturnerx. I appreciate it a lot :)
One potential solution is to put the detector AI behind a paid API - a penny or two per detection request might be sufficient to raise the cost of adversarial probing to the point most attackers won't bother.
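A rough sketch of that idea, in case it helps: gate each detection call behind a per-request charge so large-scale probing burns credit quickly. Everything here is a hypothetical illustration (the class, the price, and the placeholder score are my assumptions, not any real service's API):

```python
COST_PER_REQUEST_CENTS = 2  # assumed price: a couple of cents per detection


class MeteredDetector:
    """Gate a detector behind per-request billing so that
    high-volume adversarial probing becomes expensive."""

    def __init__(self):
        self._balances = {}  # api_key -> remaining credit in cents

    def add_credit(self, api_key, cents):
        self._balances[api_key] = self._balances.get(api_key, 0) + cents

    def detect(self, api_key, text):
        balance = self._balances.get(api_key, 0)
        if balance < COST_PER_REQUEST_CENTS:
            raise PermissionError("insufficient credit")
        self._balances[api_key] = balance - COST_PER_REQUEST_CENTS
        # Placeholder result; a real service would run the detector model here.
        return {"score": 0.5, "remaining_cents": self._balances[api_key]}
```

The point is purely economic: an attacker who needs thousands of probe queries to craft an adversarial text now pays for each one, while a legitimate user checking a handful of documents barely notices the cost.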
I totally agree with you, but I suppose GLTR is more an experiment than a serious tool. As I mentioned, it only improves the odds of spotting fake text, and I believe that in about 6-12 months machine-generated texts will be so advanced that GLTR will be helpless against them. Unfortunately, I see little room for further development and improvement of this tool.
An amusing anecdote: some years back a colleague and I annoyed somebody at a trade fair by adversarially attacking their face detection/tracking system. We managed to fool it with two circles and a line hand-drawn on a piece of paper. Systems are much better now, but they will still have vulnerabilities.
That's really interesting. As far as I know, until the iPhone X came out, face recognition systems were generally easy to trick with a photo. Apple then introduced a dedicated sensor able to recognize a face in three dimensions, making the system resistant to images. However, I think you could still fool it with a sufficiently precise mask.
I think it's an arms race. While GLTR might not be the best detector in the future, something else will be.
I agree, but the concept of this tool will have to be reworked.