The creator of ChatGPT, OpenAI, today released a free web-based tool designed to help educators and others determine whether a particular piece of text was written by a human or a machine.
Yes, but: OpenAI cautions that the tool is imperfect and that performance will vary depending on how similar the analyzed text is to the types of writing the tool was trained on.
- “It contains both false positives and false negatives,” Jan Leike, who heads OpenAI’s alignment team, told Axios, warning that the new tool should not be used on its own to determine a document's authorship.
How it works: Users paste a piece of text into a box, and the system assesses the likelihood that the text was generated by an AI system.
- It offers a five-point scale of results: very unlikely to have been AI-generated, unlikely, unclear, possibly, or likely.
- It performs best on English text samples of more than 1,000 words, with significantly lower performance in other languages, and it cannot reliably distinguish computer code written by humans from code written by AI.
- That said, OpenAI says the new tool is significantly better than a previous one it released.
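OpenAI has not published how its classifier maps an internal score onto the five-point scale above. As a purely illustrative sketch, a detector of this kind typically produces a probability and then buckets it with thresholds; the cutoffs below are invented for the example and are not OpenAI's:

```python
def label_ai_likelihood(p: float) -> str:
    """Map a classifier's probability that text is AI-generated (0..1)
    onto a five-point scale like the one the article describes.
    Thresholds are illustrative, not OpenAI's actual cutoffs."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if p < 0.10:
        return "very unlikely"
    elif p < 0.45:
        return "unlikely"
    elif p < 0.65:
        return "unclear"
    elif p < 0.90:
        return "possibly"
    else:
        return "likely"
```

Bucketing a continuous score this way is also why false positives and false negatives are unavoidable: texts near a threshold get a confident-sounding label from an uncertain score.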
The big picture: The emergence of powerful tools like ChatGPT has raised strong concerns, especially in education. Schools in New York, for example, have banned the technology on their networks.
- Experts are also concerned about the rise in AI-generated misinformation as well as the possibility of bots impersonating humans.
- A number of other companies, organizations, and individuals are working on similar tools to detect AI-generated content.
Between the lines: OpenAI said it is considering other approaches to help people distinguish between AI-generated text and human-created text, such as including watermarks in works produced by its AI systems.
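OpenAI has not said how such a watermark would work. One published idea from the research literature (not necessarily OpenAI's approach) is to bias a model toward a pseudo-random "green" subset of the vocabulary at each generation step, chosen from the previous token; a detector can then count how often the text lands in those green sets. A minimal sketch, with an invented toy vocabulary:

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list, fraction: float = 0.5) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous
    token, and return the 'green' half. Illustrative scheme only."""
    seed = int(hashlib.sha256(prev_token.encode("utf-8")).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def green_fraction(tokens: list, vocab: list) -> float:
    """Fraction of tokens that fall in each step's green set.
    Watermarked text should score well above the baseline (0.5 here);
    unwatermarked human text should hover near it."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for prev, tok in pairs if tok in green_set(prev, vocab))
    return hits / len(pairs)
```

Unlike a trained classifier, a detector like this checks for a deliberate statistical signal, so it can be much more reliable on text the watermarking model produced, but it detects nothing in text from models that don't embed the signal.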