Grammarly’s AI “Experts” Now Include Dead Professors


A new manuscript review tool modeled on real scholars is drawing outrage across academia and raising questions about consent, copyright and digital identity. Some of those scholars are no longer alive, prompting critics to accuse the company of practicing “digital necromancy.”


As artificial intelligence rapidly moves into classrooms and research labs, a new debate is unfolding over how far technology companies should go in recreating the voices and authority of real scholars.

Grammarly, the writing assistance company used widely by students and academics, is facing criticism after users discovered that one of its newest tools allows manuscripts to be reviewed by artificial intelligence versions of well-known scholars, including some who have died.

The controversy erupted this week after a historian noticed that the feature appeared to include a simulation of David Abulafia, a prominent historian who died in January.

The discovery quickly spread through academic circles online, where scholars questioned both the ethics and legality of the practice.

Grammarly introduced the feature, called “Expert Review,” as part of a suite of generative AI tools aimed at helping writers refine academic work. The company says the tool can help users “meet the expectations of your discipline and your project by drawing on insights from subject matter experts and trusted publications.” To use it, writers open a document in Grammarly’s AI platform, select an expert and receive suggestions modeled on that scholar’s research and writing.

The system can also rewrite passages of a paper according to the recommendations.

“Revise the draft yourself or let Expert Review rework things for you,” Grammarly’s website claims.

The backlash began when Verena Krebs, a medieval historian at Ruhr University Bochum, shared a screenshot on Sunday showing the tool offering Abulafia as one of the available “experts” who could review a manuscript.

Since Abulafia died earlier this year, scholars questioned whether his name and work were being used without permission.

“Grammarly is now offering ‘expert review’ of your work by living and dead academics,” Vanessa Heggie, an associate professor at the University of Birmingham, wrote in a LinkedIn post. “Without anyone’s explicit permission it’s creating little LLMs based on their scraped work and using their names and reputation.”

The idea that a software platform could generate feedback under the names of real scholars struck many academics as unsettling, particularly when those scholars could not consent.

“I have seen a lot of cursed stuff in my time in academia but this is among the most cursed,” Claire E. Aubin wrote in a post on the social media platform Bluesky that quickly spread across academic networks.

Others used even sharper language.

“This is literally digital necromancy,” wrote Kathleen Alves in a Bluesky post.

“NecromancerLLM,” echoed Hisham Zerriffi, an associate professor at the University of British Columbia. “Seriously, dead or alive, this is just wrong.”

The episode reflects a broader struggle unfolding across universities as generative AI systems become embedded in research, writing and teaching. Large language models are trained on vast collections of books, academic papers and web pages, often without explicit permission from the authors whose work becomes part of the training data.

For scholars, the concern is not only that their research is being used to train AI systems but that their names and reputations may now be attached to machine-generated advice.

Grammarly’s new tool appears to draw on publicly available publications and academic writing to model how particular scholars might review a manuscript in their field. But critics argue that presenting the feedback as coming from a named expert crosses an ethical line, especially when the scholars themselves have not agreed to participate.

The feature is not the only one raising questions. Grammarly has also introduced what it calls an “AI grader agent,” designed to help students predict how their work might be evaluated. The tool generates feedback by searching “publicly available instructor information” about a student’s teacher or professor.

To some educators, that approach risks turning academic assessment into a predictive game in which students tailor their writing not to improve ideas but to satisfy an algorithm’s guess about a professor’s preferences.

The controversy comes at a time when universities are still struggling to define clear rules for artificial intelligence in academic work. Many institutions allow limited AI assistance for editing and grammar but prohibit using the technology to generate entire essays or research papers.

Yet the rapid pace of development by companies like Grammarly has made it difficult for academic policies to keep up.

For critics, the deeper concern is what happens when the authority of scholarship itself becomes something that can be simulated by software.

A review written by a colleague has long been one of the foundations of academic research. When that voice is replaced by an algorithm speaking in the name of a real person, some scholars worry that the line between genuine expertise and artificial imitation begins to blur.
