In a world where artificial intelligence is often touted as the panacea for all our problems, Grammarly's recent fiasco with its Expert Review feature serves as a stark reminder that technological innovation is not free from ethical consequences. The idea behind the feature was simple: use the names of journalists and experts to offer writing suggestions. But what seemed like a technological advancement quickly spiraled into a legal and ethical nightmare.
Shishir Mehrotra, CEO of Superhuman, tried to downplay the controversy by claiming that using the names wasn't a form of identity theft. Yet journalist Julia Angwin, who filed a class-action lawsuit, and many other critics disagree. They argue that using names without consent for commercial purposes is not only a rights violation but also a blatant exploitation of content creators.
The feature was launched in August 2025 and was swiftly pulled after receiving negative feedback, well before the lawsuit was filed. Mehrotra stated, "We decided to remove it quite quickly. Notably, we decided to remove it while there was still feedback well before there was a lawsuit." This quick decision might be seen as a sign of responsibility, but it also raises the question of why such a feature was launched without thorough ethical consideration.
With 40 million daily active users, Grammarly has significant reach. Yet this reach does not justify exploiting human voices without their consent. Nilay Patel, editor-in-chief of The Verge, pointed out that the feature was "so extractive and the value wasn't there." This remark highlights a broader issue in the tech industry: the obsession with extracting value without considering those exploited in the process.
The Grammarly case is not an isolated incident. It fits into a broader context of tensions between AI technology and content creators' rights. As AI continues to encroach on creative industries, questions about ethics and responsibility become increasingly pressing. Tech companies must ask themselves: who really benefits from innovation? And at what cost?
The answer to these questions is not simple. On one hand, AI has the potential to transform industries, improve efficiency, and boost creativity. On the other, it threatens to reduce human creators to mere resources to be exploited. The balance between innovation and ethics is delicate, but it is essential to ensure that technology serves humanity, not the other way around.
Ultimately, the controversy surrounding Grammarly is a harsh reminder that technological innovation must be guided by strong ethical principles. Companies must be held accountable for their actions and ensure that their innovations respect individual rights. Otherwise, we risk creating a world where technology is not in the service of humanity, but rather a tool of exploitation.
As we move forward in the digital age, it is worth remembering that innovation without ethics is not true innovation. Tech companies must recognize their responsibility toward content creators, because in the end, AI should not be a tool of exploitation but a tool for enhancing human life.
Frequently Asked Questions
Q: What was the controversy surrounding Grammarly's Expert Review feature?
The controversy arose when Grammarly used the names of journalists and experts to provide writing suggestions without their consent, leading to accusations of identity theft and exploitation. This prompted a class-action lawsuit filed by journalist Julia Angwin, highlighting ethical concerns in the use of AI.
Q: Why did Grammarly remove the Expert Review feature?
Grammarly decided to remove the Expert Review feature shortly after its launch in August 2025 due to negative feedback from users. CEO Shishir Mehrotra stated that the decision was made quickly, even before the lawsuit was filed, indicating a response to public concern.
Q: What are the ethical implications of using AI to hijack human voices?
The ethical implications include potential rights violations and exploitation of content creators, as using names without consent for commercial purposes raises significant moral questions. Critics argue that such practices reflect a broader issue in the tech industry, where value is often extracted without considering the impact on individuals involved.
