AI’s Dirty Little Secret: Stanford Researchers Expose Flaws in Text Detectors
In a study recently published in the journal Patterns, researchers demonstrate that the algorithms commonly used to identify AI-generated text frequently mislabel articles written by non-native English speakers as AI-generated. The researchers warn that the unreliable performance of AI text-detection programs could adversely affect many people, including students and job applicants.
https://scitechdaily.com/ais-dirty-little-secret-stanford-researchers-expose-flaws-in-text-detectors/ #AI #text #fraud #detection #NonNative
@Nonog even in a country where English is the main language (or a well recognised second language) there are numerous regional dialects and variants - I expect these detectors would falsely trigger on those too (and worse, any method to prevent this could also be weaponised to "discriminate by stealth", especially as there are often multiple online stages before someone is selected for a job interview)
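For context on why this kind of false positive happens: one common family of detectors (not necessarily the specific commercial tools evaluated in the Patterns study) scores text by how predictable it looks to a language model, treating low perplexity as a sign of machine generation. The sketch below is a minimal illustration of that heuristic, assuming the Hugging Face transformers library, GPT-2 as the scoring model, and an arbitrary threshold I chose for demonstration; simpler, more formulaic prose, which is common in non-native writing, also scores low, which is exactly the failure mode the study describes.

```python
# Minimal sketch of a perplexity-based "AI text" heuristic (NOT the detectors
# evaluated in the study): score text with GPT-2 and flag low-perplexity prose.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

THRESHOLD = 50.0  # arbitrary illustrative cut-off, not taken from the paper

def looks_ai_generated(text: str) -> bool:
    # Lower perplexity = more "predictable" text. Simpler vocabulary and
    # sentence structure also lower perplexity, so human-written text from
    # non-native speakers can be flagged just as easily as machine output.
    return perplexity(text) < THRESHOLD

if __name__ == "__main__":
    sample = ("The results of the experiment show that the method is good "
              "and it works well in many cases.")
    print(f"perplexity={perplexity(sample):.1f}  flagged={looks_ai_generated(sample)}")
```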
GPT detectors are biased against non-native English writers
https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7