Artificial intelligence (AI) has changed the game for how people access information.
It can summarise, analyse, and even draw conclusions from thousands of pages of text, images, or almost anything else you give it, all on your behalf. For many university students, AI has become an invisible assistant used to outsource tasks, organise thought processes, copy-edit, and do its best to make you sound like an authority on any given subject.
But should university students who want to become adept in their chosen subject really be leaving the thinking to a computer?
AI can now produce polished essays, psychological arguments, and even reflections that mimic personal insight in seconds. It can write in a range of different styles and take care of all of your citations, which is often one of the suckiest parts of academic writing. It can articulate theories in a way that reads better than many humans could write, so where does that leave people with a thirst for knowledge?
For some students, that might sound like a dream run, but for anyone looking to be the best in their field, it can be a problem, especially if you're not producing the thoughts yourself.
In this article, I want to explore how the normalisation of AI in universities is changing the way people think, how it threatens the core of learning itself, and what this means for future professionals in fields that rely on genuine human understanding. I also want to reflect on my own experience as a postgraduate student, where AI use is not only accepted but encouraged, and what that has meant for my own learning and growth.
Learning How to Learn with AI
It's been a long time between drinks for me when it comes to studying. I completed my undergraduate degree in Urban and Regional Planning in 2012, graduating with honours. Then, back in 2018, I decided to pursue postgraduate studies in Positive Psychology. A year in, life got in the way, then COVID hit, and I eventually swept my study under the rug without a second thought.
It wasn't until this year that I felt the desire to go back and complete my studies, some seven years later. I went into this experience with the mindset I held last time: a lot of readings, critical essays, and long nights wrestling with thoughts so that I could put pen to paper and share my perspective with the world.
But what I wasn't ready for was a new institutional companion called “GenAI.”
In 2025, Australian universities, including my own, have made GenAI part of the official study and research process. I still don't know a great deal about it, but from what I can tell so far, students can use tools like ChatGPT and Grammarly for summarising, drafting, copyediting, or exploring topics, as long as they disclose that use upon submission.
It sounded pretty cool at first. I already have a decent understanding of AI prompting and how to get specific work outcomes out of the tech, so why would submitting an assignment be any different?
But as I started using it, something didn’t sit right.
Full disclosure: I used AI on all of my assignments. I used it to formulate a structure based on my own thoughts and conclusions, drawn from the suggested readings as well as additional sources, and then to stress-test my arguments and check my spelling and APA referencing.
AI essentially allowed me to produce faster, more rigorous work that met the course requirements better than I ever could on my own. When you get into the weeds of a critical review or other coursework, it's easy to go down a rabbit hole and end up writing something that doesn't meet all of the marking criteria.
What I noticed was that while my marks were of a high standard, my comprehension and deep knowledge of the course content left a lot to be desired.
I was learning less about positive psychology and more about how to get AI to produce what I wanted.
It became a kind of game: the better I got at writing prompts, the better my essays became. The learning was no longer about the content. It was about the manipulation of the process. I had effectively become a project manager for my own degree, directing AI to do the intellectual heavy lifting while I edited its tone to sound more like me.
It's left me feeling a bit weird about the experience, if I'm honest. I was always a high academic achiever (not something I want to boast about, but it's true). I'm still getting the same results now, but without the mental anguish I went through the first time around at university.
It feels like I haven't earned the results - even though I still did the work.
From Learning to Managing Prompts
The rise of AI has shifted what it means to study. Instead of learning how to think, students are learning how to be skilled at prompting and guiding a machine to deliver the result for them.
And while the option not to use AI is there, you would be hard-pressed to find anyone who would spend days on an assignment when a shortcut could get it done in a matter of hours.
Students now have the option to avoid learning when they can outsource it. Instead of critically thinking through points of view, they are formatting them for an algorithm.
And typing a good prompt can feel like you're ahead of the curve because you're managing the task, but in the long term, it's not the same as critical thinking. Sure, AI can generate arguments, restructure paragraphs, and rewrite coherent reflections in seconds, but ultimately the student's role shrinks to that of a facilitator of information, which diminishes the ability to build one's own intellect.
This creates a false sense that you've aced the coursework. You might produce work that looks intelligent and earns a good result, but the intelligence is borrowed. You didn't earn it the way you would have if you'd done it the old-fashioned way.
The learning process is supposed to be slow, frustrating, maybe a bit stressful, and deeply reflective. With AI, thinking becomes transactional: you input data, it outputs a result, and you move on to the next thing.
The more we rely on it, the less we rely on ourselves.
The Problem of Accepted Dependence
Universities are at a real crossroads when it comes to using AI in learning.
They teach critical thinking while permitting tools that let you skim the surface of a topic, and the parameters feel unclear for the end user. When you could submit an entirely AI-written assignment, change a few words, and simply disclose that you used AI, it's hard to see how this won't produce a generation of students who miss out on a real education.
For me, university didn't just teach me the discipline I wanted to learn; it taught me how to think and, more importantly, how to go out and learn something for myself. Leaning on AI undermines that very purpose of higher education. Learning isn't about speed. It is about struggle, confusion, stress, and the eventual realisation that comes from the hard work you put in.
When AI fills those gaps, the student no longer builds the cognitive muscle required to grow.
In my own studies this year, I've felt deeply conflicted about my use of AI, but I wanted to try it and see how it worked. And when you can feed AI every instruction from an assignment rubric and have it produce a coherent response in seconds, the line between understanding and convenience blurs.
I don't feel like I've learned as much as I could have, but I have learned a valuable lesson about how I want to approach my studies moving forward.
AI is seductive because it works. You get the grades. You meet the deadlines. You sound credible. But at the same time, you lose the chaos of making sense of things yourself, which is what real learning is supposed to be.
The Illusion of Original Thought
AI-generated writing feels original, but it is not.
It is predictive. It pieces together patterns from millions of texts to generate what it thinks you want to say. You can literally feed it every piece of information you need to achieve a result and then build your work around that. And if AI can do all of that for you, what does it actually teach you?
In academia, originality comes from the friction of opposing ideas and making sense of them in your own way. It comes from struggling with concepts until the penny finally drops. AI strips that away and replaces the struggle with convenience.
When you remove discomfort from people's lives, you take away the very things that shape a person's character. Resilience, failure, and accomplishment are reduced to throwaway words that lose their meaning, and the more this happens, the more cause people will have to worry about their futures.
AI makes you sound smarter while making you think less.
It's not really progress in my view. It's just creating a generation of 'vanilla' thinkers who won't be able to read between the lines and challenge cultural, political and social norms.
Why Students Still Choose It
Students don't use AI because they are lazy; they use it because the system lets them, and ultimately, they are rewarded for it. Assignments pile up, deadlines collide, life gets in the way, and the pressure to perform and compete is constant. For students in that position, AI isn't a shortcut; it feels like the only way to survive.
I've felt that temptation too, and it's another weak justification I've made for using it in my coursework. Juggling work, study, and life commitments means efficiency becomes the priority: I need to study when I have the time, and AI helps me do that.
But that efficiency has a cost. When learning becomes about making sure you meet the marking criteria to get the top grade, you lose the deeper satisfaction of figuring something out through your own reasoning.
The education system has quietly encouraged this dependence. When universities publicly integrate GenAI into their learning environments, they're signalling that being good at AI matters more than depth of knowledge. Students adapt to that message and become masters of outsourcing rather than critical thinkers.
What This Means for the Future
The danger is no longer that students will cheat using AI; that has already been accepted as the 'norm'. The real problem is that students will become more and more dependent on a technology that hands them answers rather than letting them discover solutions for themselves.
And in my field of study, if the next generation of psychologists, counsellors, and wellbeing professionals are trained in an environment where AI does all of the legwork, what kind of practitioners will they become? More importantly, will humans even be needed for this kind of work if people grow more comfortable seeking advice from a computer than from another person?
For now, you can't ask AI to interpret a client’s emotions or sense what is unsaid in a moment of silence. You cannot outsource empathy. You cannot prompt for intuition. Those qualities come from lived experience, reflection, and the sometimes awkward discomfort of a real human interaction.
When AI becomes the norm in academic settings, it risks producing graduates who can describe empathy but have never practised it, who can analyse human behaviour without understanding their own.
It is both an interesting and scary time for students who crave a real education.
Final Thoughts
AI can support learning when used carefully, but it should never replace the discomfort that real thinking requires.
In my experience, GenAI has been both a blessing and a curse. It has shown me what technology can do, but it has also revealed what I lose when I depend on it too much. I have learned how to use AI effectively, but not necessarily how to think more deeply. I did the work, but I don't feel like I earned the results I received.
Education was never meant to be easy. It was meant to challenge, frustrate, and transform. It's about the struggle, and that struggle helps develop one's life story. Your character only develops when you engage your own mind, not when you outsource it to a machine.
So the real question for students, educators, and professionals is not whether AI should be allowed. It is what kind of minds we are trying to cultivate. Do we want thinkers or technicians? Learners or managers of information?
If universities continue to blur that line, we may soon find ourselves graduating experts who know how to use the tools in their toolbox but will never know how to think for themselves.
And lastly, will I use AI again for university coursework if I choose to take my studies further? Probably not, but it all depends on what is going on in my life at the time.
