As I continue to learn more about AI, there is something that puzzles me and I would like to hear from other people in this forum.
If my understanding of how AI works is correct, it seems impossible to find out when something has been generated with AI.
For example, if we ask ChatGPT to write an essay about "classical music", we will get a different answer every time. Therefore, it may be impossible to know that an essay has been generated with AI.
Is this correct? Can we create a program that helps us identify whether something (i.e. an essay) has been created using AI?
I guess the answer is no (because if the answer were yes, I would imagine that universities would pay for such a program to find out which students are being lazy).
AI detection tools like GPTZero are useful but far from perfect. At CuriousAI.net, we love putting AI to the test—so here’s a challenge for all of you!
The GPTZero Challenge
Write two short texts (2-3 paragraphs each) on the same topic with a similar structure:
1️⃣ The first text should be written entirely by you, with no AI assistance.
2️⃣ The second text should be generated 100% by ChatGPT.
Then, run both texts through GPTZero and see what happens! Will it correctly identify the AI-written piece? Or will it get confused?
Try it in English or Spanish and share your results! Let's see how well these tools really work.
This is a very interesting question! While it is true that AI-generated text changes every time, there are still ways to detect whether something was written by AI.
One approach is using AI detection tools. Some popular tools include GPTZero and Turnitin’s AI detection. These tools analyze text based on patterns, sentence structure, and word choices to estimate the likelihood that AI created it. However, they are not 100% reliable. For example, GPTZero claims to detect AI writing by looking at "perplexity" (how unpredictable the text is) and "burstiness" (variation in sentence structure). But it can sometimes label human-written text as AI-generated (false positives) or fail to detect AI content (false negatives).
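To make the "burstiness" idea concrete, here is a toy sketch in Python. This is not GPTZero's actual algorithm (which is proprietary); it is just an illustrative heuristic, assuming we approximate burstiness as the spread of sentence lengths in a text. The function name and the sample texts are invented for the example.

```python
import re
import statistics

def burstiness(text):
    """Sample standard deviation of sentence lengths (in words).

    Low values mean uniformly sized sentences, a pattern that
    detectors tend to associate with AI-generated text; human
    writing usually mixes long and short sentences more freely.
    """
    # Split on sentence-ending punctuation and drop empty pieces.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths)

# Invented sample texts for illustration only.
human_like = ("I wrote this in a rush. Sorry! Some sentences ramble on "
              "far longer than they probably should, while others stop short.")
ai_like = ("The text is clear and concise. Each sentence has equal weight. "
           "The structure stays uniform throughout. The tone never varies.")

print(burstiness(human_like) > burstiness(ai_like))  # → True
```

Of course, a real detector combines many such signals (and a language model to score perplexity), which is exactly why a single crude metric like this produces false positives and false negatives.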
Another way is by examining writing style. AI often produces well-structured but generic and repetitive content. It might lack deep personal insights, unique experiences, or emotions that a human writer would naturally include. For example, a teacher might notice that a student's in-class writing is very different from a perfectly polished essay submitted online. This is already happening—some universities and high schools have reported cases where students admitted to using ChatGPT after being confronted about unusual writing styles.
However, detecting AI-generated content is becoming harder as AI improves. New models can mimic human writing styles more accurately. This is why some universities are shifting towards oral exams, live writing sessions, or asking students to explain their essays in person to verify their understanding.
So, while it’s difficult to be 100% sure if something was written by AI, some methods—like AI detection tools and writing comparisons—can help. But as AI evolves, detection methods will also need to improve!
What do you think? Should schools and universities rely on AI detection tools, or should they find new ways to assess students' work?