Grammarly pulled an AI feature that impersonated real writers after people complained.
Imagine opening your writing tool… and suddenly getting feedback “from” your favourite author.
Sounds cool, right?
Now imagine that the author never agreed to it.
That’s exactly the controversy that just forced Grammarly to pull one of its newest AI features, sparking backlash, legal action, and a bigger conversation about how far AI should go.
The Feature That Went Too Far
Grammarly recently introduced an AI feature called “Expert Review.”
On the surface, it sounded impressive.
The tool would give writing suggestions “inspired by” famous writers and experts, including names like Stephen King and Carl Sagan.
The idea?
Make it feel like your writing was being reviewed by some of the greatest minds in history.
But there was one major problem:
Those people never gave permission.
The Backlash Begins
As soon as the feature gained attention, criticism came fast and loud.
Writers and journalists were shocked to see their names and identities being used as AI personas inside a paid product.
One of the most vocal critics was Julia Angwin, an investigative journalist and contributor to The New York Times. She described her reaction as “stunned.”
For her, it wasn’t just about technology; it was personal.
Her professional identity, built over years, was suddenly being used as part of a commercial AI feature.
And she wasn’t alone.
Within days, multiple writers joined in, saying their names, credibility, and reputation were being used without consent.
From Criticism to Lawsuit
What started as online backlash quickly turned into something bigger.
A class-action lawsuit was filed in New York against Grammarly and its parent company.
The claim?
The companies allegedly used the identities of hundreds of writers to promote a paid AI feature without permission.
The lawsuit argues this is not just unethical; it may be illegal, especially when someone’s name is used for profit without consent.
Even more striking, the case is already gaining attention, with dozens of people reportedly stepping forward shortly after it was filed.
“This Is Not Just AI… It Is Impersonation”
One of the biggest concerns raised was simple:
Where do we draw the line between inspiration and impersonation?
Critics say the feature did not just “learn from” writers; it made it seem like those writers were actively giving advice, even when they were not.
Some even mocked the quality of the AI suggestions.
Julia Angwin called the output a “slopperganger,” a mix of AI-generated content and poor imitation.
Her concern?
Not just that her name was used, but that it was used to give bad advice.
Grammarly Responds
As the backlash grew, Grammarly acted quickly.
The company removed the feature entirely and issued a public apology.
The CEO admitted the tool had “misrepresented” expert voices.
He acknowledged the criticism, saying:
“We hear the feedback and recognize we fell short.”
Grammarly also stated that the feature had limited usage and was already being reviewed before the lawsuit was filed.
Still, the damage had been done.
The Bigger Question: Who Owns a Voice?
This situation goes beyond just one feature.
It raises a deeper question about the future of AI:
Can a company use your style, name, or identity without your permission?
In the age of AI, where machines can mimic voices, writing styles, and even personalities, the boundaries are becoming unclear.
And this case might help define them.
What Happens Next?
Grammarly says it plans to redesign the feature, this time with better safeguards and likely more direct involvement from real experts.
But the legal battle is just beginning.
The company has said it will fight the lawsuit, calling the claims “without merit.”
Meanwhile, writers and creators are watching closely.
Because what happened here could set a precedent for how AI companies use human identity, creativity, and reputation in the future.