Who Really Wrote This? The Essay, AI, and the Complexity of Authenticity
As AI enters the college admissions process, it’s forcing us to ask what the essay is really measuring and whether we’ve ever been honest about it.
Is It All AI Now?
Lately when I talk with other educators about AI and college admissions, I often get some version of the same question: “You must be seeing a lot of AI-generated college admissions essays now, right?” Sometimes it’s phrased with concern, other times with curiosity, but the assumption is the same: that AI has flooded the system with inauthentic writing and made our jobs as application readers harder. There’s often a follow-up about AI detection tools and what we do when we suspect a student didn’t write their own essay.
But here’s the thing: we’ve never really known who writes a college essay. Not before AI, and not now. We didn’t have a way to verify whether a parent, teacher, or private counselor shaped a student’s essay too much, and we weren’t trying to. In fact, we actively encourage students to get help in the writing process. We expect the final product to be polished and edited. AI isn’t changing this expectation. What it has done is make that kind of help more accessible. It didn’t create the authenticity problem; it has just made it impossible to keep pretending the problem didn’t exist.
The Polished Essay Was Always a Product of Help
The college essay has long been framed as a deeply personal piece of the application, a space where students can share their story in their own voice. But we’ve also made it abundantly clear to students that they shouldn’t write it alone.
Most high schools, especially in college-going communities, offer built-in guidance: counselors, workshops, summer bootcamps, and feedback sessions. As someone who works in admissions, I’m regularly invited to join essay writing sessions hosted by high schools throughout the Bay Area. These are structured, resource-rich environments that explicitly teach students how to write in the genre of the college essay, often with multiple rounds of editing and review.
Parents, too, are active participants. Many see it as their role to help, and many more turn to paid professionals. Where I live, Independent Educational Consultants aren’t the exception; they’re the norm for middle- and upper-middle-class families, even those attending private schools with excellent college counseling offices.
We’ve built an entire industry around helping students craft the “perfect” college essay, while still insisting it be “authentic.” We’ve quietly agreed that authenticity can survive a dozen drafts, a parent’s rewrite, or $10,000 in coaching, but when a student uses a free AI tool, suddenly the essay’s integrity is in question.
The real problem isn’t that AI has tainted some ideal of pure student writing. It’s that the ideal never existed to begin with. We've been asking essays to measure something — raw, personal, unfiltered voice — that we've systematically coached, edited, and professionalized out of the process.
AI hasn’t broken the system. It’s simply made it harder to pretend the system was ever clean in the first place.
Why AI Help Feels Different
Students have always received help on their college essays, from parents, teachers, school counselors, and paid consultants. We've built a system where this support is not only accepted but expected. What’s more, we've often celebrated it. There’s something reassuring, even virtuous, about the image of a trusted adult working closely with a student to help them find their voice.
So why is AI help treated so differently?
Part of it is access. AI removes the gatekeepers. Students no longer need to pay thousands of dollars for high-quality writing support. But part of it is something deeper: AI help feels uncontrolled. When a student gets guidance from a parent or coach, we can imagine that guidance. We trust it has a human logic, a discernible origin. But when ChatGPT generates a paragraph of text, it feels like it’s coming from nowhere and everywhere at once. The writing appears suddenly, without context, and that feels untrustworthy, even inauthentic.
There’s a kind of comfort in paying for a credentialed adult to spend time coaching your child on an essay. Even when the help is extensive, it still feels human. AI, on the other hand, can mimic that coaching in many ways, but it comes without a face, without a relationship, and without any clear norms for how it should be used. That absence of clarity is what feels most unsettling.
Right now, students are navigating this space largely on their own. There’s very little guidance on how AI can be used ethically and still result in something that feels like a student’s authentic work. Among educators, opinions vary widely. Some are deeply concerned and eager to address the implications of AI in college essays, while others believe their students are not using these tools in significant ways and feel no urgency to respond. As a result, there is no shared framework, no common message, and no clear understanding of how this technology should fit into the admissions process. Students are left to interpret inconsistent signals and figure out the boundaries on their own.
That’s why tools like esai.ai and athenaco.ai are beginning to fill a necessary gap. They offer structure, reflection, and guardrails that aim to support the college essay writing process without replacing the student’s ideas or voice. But we’re still early. Until there is broader consensus on what responsible use looks like, AI will continue to feel like a shortcut for some and a threat to authenticity for others.
Admissions Reality: The System Isn’t Built to Verify
The truth is, colleges are not equipped to investigate how an essay was written. There is no authentication process, no authorship review, and no meaningful mechanism for distinguishing between a polished student draft and a heavily assisted one. In most admissions offices, the essay is read and evaluated at face value, often in less than three minutes. It is one part of a broader application, and while it can offer insight into a student’s experiences and perspective, it has never been a verified document. In practice, I think it tells us very little about a student’s actual writing ability. What it reflects more clearly is the extent to which a student had access to the tools, feedback, and support necessary to produce a well-crafted college essay.
We have long operated on trust. Trust that the student wrote the essay, or at least that they had a hand in shaping it. Trust that the guidance they received was within acceptable bounds, even if no one defines exactly what those bounds are. In reality, students have always arrived at the final product through vastly different paths, with varying degrees and types of support. And yet, we have treated those essays as if they were produced in a uniform way.
The arrival of AI doesn’t change the system so much as it reveals how little the system was designed to handle questions of authorship in the first place. Institutions are not using AI detection tools because they do not work. Even if a tool could accurately suggest that a portion of an essay was AI-generated, there is no policy framework in admissions offices for how to act on that information.
Unless institutions are prepared to radically change the role of the essay, by replacing it, restructuring it, or developing new systems to evaluate its origin, it is unlikely that AI will fundamentally alter how essays are reviewed. The process is not built to monitor inputs. It is built to read what is submitted and make the best judgment possible, given the context available.
The Counselor and Educator Dilemma
Many educators are still trying to figure out how AI fits into the college admissions process. Some have responded by banning it entirely. Others believe their students are not using it in any meaningful way and feel no urgency to address it. Most are unsure what guidance to give, how colleges are handling AI-generated content, or where to draw ethical lines.
This lack of clarity has left students caught in the middle. They are told to be authentic, but no one is explaining what that looks like when AI is part of the writing process. Some students avoid AI altogether because they are afraid of getting it wrong. Others rely on it heavily, unsure how much use is too much. Without clear expectations, students are left to make decisions in isolation.
We need to shift from uncertainty to shared responsibility. That begins with practical, transparent steps that both educators and institutions can take.
What could help:
Clear guidance from university admission offices
Institutions should explain how they view AI in the application process. Is it acceptable for brainstorming, editing, or feedback? What uses cross the line? Check out these university policies relating to AI use in the admission process.

Definitions of “authenticity” that reflect reality
Students need more than vague encouragement to be themselves. Schools and colleges should clarify what makes an essay feel personal, even when tools are involved.

School-based guidelines on AI use
High schools and counseling offices can give students clear expectations for appropriate AI use, just as they do with academic integrity policies.

Teaching ethical AI use
Students should learn how to use AI thoughtfully. That includes knowing how to ask the right questions, reflect on suggestions, and take ownership of what they submit.

Better coordination between high schools and colleges
When counselors and admissions offices are aligned, students receive more consistent and supportive messages. Open communication can reduce confusion and build trust.
Until we offer students consistent, thoughtful guidance, AI will continue to be a source of anxiety and mixed signals. The technology is moving quickly, but our response can still be deliberate and student-centered.
What Are We Actually Measuring?
The growing presence of AI in the college application process has surfaced a question we should have been asking all along. What is the personal essay really meant to show?
If a student can use AI to produce an essay that reads as thoughtful and compelling, does that mean the essay has lost its value? Or does it mean that we were never quite sure what we were evaluating in the first place?
The problem is not AI. The problem is that we have never clearly defined what kinds of support are acceptable, or what we expect an “authentic” essay to look like. We have allowed an uneven system of help to thrive in the background while telling students that the essay is a personal and honest reflection of who they are.
Until we create shared expectations for how tools like AI can be used, students will continue to experiment, optimize, and second-guess what is allowed. Colleges will continue to read essays without knowing how they were created. And we will all keep pretending that the playing field is more level than it actually is.
If we want to preserve the value of the essay, we need to be more honest about what it is actually measuring and what kinds of support are fair. Otherwise, we risk turning a tool with the potential to expand access into yet another symbol of the lack of transparency and equity in the college admission process.