AI in Academia: A Tool, a Threat, or a Mirror?

Jake Van Clief

In response to a recent article addressing the ethics and potential pitfalls of AI involvement in academic pursuits, particularly the writing of a Ph.D. thesis, I'd like to offer a perspective that contrasts with the responses left by various WSJ readers.
I'd like to argue not only that we should allow AI in thesis writing, but that it will likely become a 'soft' requirement in academia as time progresses, much in the same way that access to digital libraries like SAGE or JSTOR, and even Google, is a staple tool in the modern researcher's belt.
First, let's consider the exponential growth in capability and accuracy that AI models have achieved in recent years. These models are not just stringing together random words; they curate information and provide insights on data that edge toward robust critical thought.
Many detractors of AI insist that it is largely inaccurate and incapable of anything more than basic writing.
However, with the growing black-box problem in statistical computing, it is becoming harder to argue that these models are simply "guessing the next word."
I have become more convinced of this over the last year of work and research using AI in academia. In fact, I used a few AI models to help me significantly with one of my most recent papers, which was accepted for publication by the Saints Academic Review.
I made it clear how, when, and why I was using different models in the introduction of my paper. This disclosure, along with the content of the paper itself, satisfied the peer-review board, and the paper was accepted for publication.
You can read the unpublished version here: _Rise of the Virtual Vanguard: A Treatise on Post-Digital Governance_ (PDF, 379KB).
Now, the main issue the author of the aforementioned article raised was with AI writing a thesis almost entirely. This argument is sound enough on the surface; however, its actual execution is blurrier than they may realize. Let me explain.
I've scanned through numerous AI-generated papers and have been hands-on in crafting many myself.
Through this experience, I've observed a pivotal distinction: the most profound and high-caliber AI outputs are not the result of casual or arbitrary prompts. Instead, they emerge from intricate, well-considered questions that necessitate an in-depth understanding of the subject matter.
In many instances, crafting these precise and nuanced prompts demands a level of critical thinking and expertise that highlights the ability of the author more than the writing itself does. The value lies not just in the AI's response but in the human intellect's ability to question, probe, and guide the AI to produce meaningful content.
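To make the contrast concrete, here is a minimal, hypothetical sketch in Python. The prompt texts are invented for illustration; they are not the prompts used in my paper, only an example of the difference between a casual request and an intricate, domain-informed one:

```python
# Hypothetical illustration: the same request posed two ways.
# Neither prompt is from the paper above; both are invented for contrast.

casual_prompt = "Write a few pages about post-digital governance."

considered_prompt = (
    "Acting as a scholar of political theory, draft a section arguing that "
    "legitimacy in post-digital governance rests on algorithmic transparency. "
    "Structure it as: (1) a working definition of legitimacy grounded in "
    "Weber, (2) the two strongest objections a skeptic would raise, and "
    "(3) rebuttals that concede the limits of each objection. Flag every "
    "claim that would need a citation."
)

# The second prompt encodes domain knowledge (Weber, the objection/rebuttal
# structure, citation discipline) that the author must already possess; the
# model's output quality largely reflects that embedded expertise.
for name, prompt in [("casual", casual_prompt), ("considered", considered_prompt)]:
    print(f"--- {name} prompt ---\n{prompt}\n")
```

The point of the sketch is that the second prompt could only have been written by someone who already understands the field; the AI's output inherits that understanding rather than replacing it.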
Similarly, those who reach the doctoral level are not generally looking for shortcuts without good reason. Many (hopefully all) of these individuals are deeply passionate about their field and are seeking to contribute meaningfully to it. If AI can aid in that process, offering more precise data analysis or helping to structure complex arguments, why should it be dismissed? This is not about replacing human effort but augmenting it.
Again, I do not want to dismiss the author's original points, as I believe they come from solid ground; however, my own perspective conflicts with some of their thoughts.
One particular statement from the original article stood out:

"Plus it completely devalues my years of hard work and my degree."

While this sentiment is understandable, it seems to underscore a larger issue: the enduring strength of 'back in my day' syndrome.
I mean this with only the slightest twinge of playful mockery.
This perspective creates a problem: it tends to view newer methodologies or technologies as inferior to, or 'easier' than, traditional methods, often downplaying the potential benefits they bring.
Further, this statement seems to highlight not a focus on the greater good of academia but the insecure fear of being "outdone" or "replaced." Again, a completely reasonable fear; however, one that will not likely come true.
This mindset carries into further thoughts, as one response explains:

"Thousands of people over the course of centuries have poured their hearts and souls into producing an original piece of scholarship that would mean something to them personally and to the wider academic community."

The author here seems to argue that a student using a chatbot to output pages of writing, then revising it, somehow takes the centuries of work done before and makes it less valuable.
In the realm of art and culture, questions often arise about the relationship between originality and replication. Consider, for instance, the vast number of artists who have been inspired by and have recreated the techniques or works of past luminaries such as Vincent Van Gogh or Rembrandt. Does the act of replication or reinterpretation diminish the value of the original works?
Today's engineers leverage modern technology, such as cranes and advanced construction techniques, to build marvels that dwarf many ancient structures in sheer scale and complexity. Does this modern advancement render the architectural feats of our ancestors, constructed with rudimentary tools and manual labor, less significant?
Before the advent of the printing press, manuscripts had to be painstakingly copied by hand. The printing press revolutionized the way information was disseminated, making knowledge more accessible to the masses. Did it devalue the work of scribes? Or did it simply change the way we produce and consume written content?
The crux of this debate lies in our understanding of value. Value, in many contexts, is not an intrinsic property but is rather conferred upon objects or ideas by their perceivers. Whether it's a piece of art, a monumental structure, or a groundbreaking treatise, its value is largely determined by the appreciation and significance ascribed to it by its audience.
Replication, reinterpretation, or advancement in technique does not inherently devalue original works. Instead, it offers a testament to their enduring influence and the foundational role they play in shaping subsequent generations of thought and creativity.
The true measure of value lies not in the novelty of a creation but in its ability to resonate with and be cherished by its audience.
Further, the merit of a piece of work isn't solely dependent on its author but fundamentally on the content and insights it presents.
This leads me to my argument for why AI not only should be accepted in academic work but will become a standard, changing our current baselines.

Thought Experiment: The AI Ph.D.

Picture a review board evaluating a student's doctoral dissertation that was generated entirely by AI.
For the sake of this thought experiment, let's assume the board is unaware that they're evaluating AI-generated content.
Upon thorough review, if the board finds the AI-generated dissertation to be of exceptional quality, endorsing it as meeting or even exceeding the standards of academic rigor, what does this tell us about our established academic systems?
It raises some questions we must certainly ask ourselves:
Validity of Current Standards: If an AI can successfully navigate the stringent criteria set by review boards, does it suggest that our benchmarks for Ph.D. qualification might need reevaluation? Could it hint at a standardized, potentially monotonous nature of academic scrutiny that AI can decode?
The Essence of a Ph.D.: Traditionally, a Ph.D. is an emblem of originality, exhaustive research, and a deep understanding of a subject. If AI can emulate this, do we need to reassess what constitutes original thought and effort in academia?
Potential Inadequacies: Does this reveal latent inadequacies in the Ph.D. process? Could there already be inherent loopholes or a lack of challenge that AI can exploit, or does it highlight the model's capability to simulate human-like intellectual prowess?
Pair this with the rapid advancement of artificial intelligence, and a trajectory where AI might not just assist but lead research, producing insights and theories that rival human-generated content in depth and innovation, begins to work its way onto the table.
On the other side of the spectrum, if AI-generated content consistently falls short of meeting the benchmarks set by academic boards, this too offers insights.
If AI outputs require extensive human intervention to be deemed acceptable, it reinforces the irreplaceable value of human intellect, creativity, and nuanced understanding. In this scenario, AI becomes a mere tool, akin to a preliminary draft or a brainstorming session, which needs refinement, depth, and a personal touch to transform into a substantial academic piece.
As we stand at this juncture, it's crucial to remain open to the idea that, in the not-so-distant future, AI could emerge as a source of authority in academic and research circles. This comes with its own major risks and challenges.
Many of these risks and challenges are ones that I and other academics dive into through both courses and forums within my online community, which is dedicated to understanding the ethical use of technology in education.
If you've made it to the bottom of this article and want to contribute to the discourse, you can find us at the Quantum Quill Lyceum.