Case Study

One of the largest global publishers leverages Trinka AI for copyediting-level assessment of language quality

Key Insights
Potential saving of over $500,000 annually
Improved turnaround time and reduced costs
Enhanced the overall quality of assessments

For STM publishers, ensuring impeccable language quality is an absolute necessity. The language editing process is pivotal and must continuously evolve to enhance efficiency.

In collaboration with one of the leading international publishers, we introduced Language Central, an AI-powered solution.

The Trinka API returns a list of error categories along with their occurrence counts.
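As an illustration, per-category error counts like these can be aggregated across a manuscript. The response shape below is a hypothetical sketch, not the actual Trinka API schema:

```python
# Sketch: tally error categories across sentence-level results.
# The "errors"/"category" field names are assumptions for illustration;
# the real Trinka API response schema may differ.
from collections import Counter


def tally_error_categories(sentence_results):
    """Aggregate per-sentence error categories into document-level counts."""
    counts = Counter()
    for sentence in sentence_results:
        for error in sentence.get("errors", []):
            counts[error["category"]] += 1
    return counts


# Hypothetical results for a two-sentence manuscript excerpt.
sample = [
    {"errors": [{"category": "subject-verb agreement"},
                {"category": "article usage"}]},
    {"errors": [{"category": "article usage"}]},
]
category_counts = tally_error_categories(sample)
```

Document-level counts like these are what allow an editing level to be assigned per article rather than per journal.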


Language Central, powered by Trinka AI, assesses research manuscripts for language quality, assigning a language score and an appropriate editing level to it. This enables publishers to seamlessly direct manuscripts to the most suitable editing teams or service providers.
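The routing step described above can be sketched as a simple score-to-level mapping. The thresholds, level names, and team assignments below are illustrative assumptions, not Trinka's actual configuration:

```python
def editing_level(language_score):
    """Map a 0-100 language-quality score to an editing level.

    The cut-offs here are hypothetical; in practice they would be
    calibrated against the publisher's own quality standards.
    """
    if language_score >= 85:
        return "light"
    if language_score >= 60:
        return "medium"
    return "high"


def route_manuscript(manuscript_id, language_score):
    """Direct a scored manuscript to an editing team (illustrative mapping)."""
    level = editing_level(language_score)
    team = {
        "light": "standard copyediting pool",
        "medium": "senior copy editors",
        "high": "native English-speaking copy editors",
    }[level]
    return {"manuscript": manuscript_id, "level": level, "team": team}
```

For example, a manuscript scoring 40 would be routed to the team handling "high"-intervention editing.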

We were asked by one of our esteemed publisher clients to find a solution that would improve copy editing quality by matching each manuscript to the copy editor with the best-suited skills. Language Central was piloted for 6 months and showed tremendous potential: its ML models accurately predicted the quality of language and, therefore, the level of intervention needed. This has not only improved their overall publication quality but has also made their process more efficient.

Sharad Mittal
CEO - Enago and Trinka AI

STM Publishers

Use Case:

Language Quality Assessment


Grammar Checker API

The Challenge

Before implementing this solution, the publisher assigned the same copy editing level to every article in a journal, when in reality language quality can vary greatly from article to article within the same journal. Assessing every article in detail is costly because it requires significant manual effort.

  • Uniform editing level ignores language quality differences.
  • Detailed assessment is costly and labor-intensive.

The Solution

We partnered with the publisher to identify areas where we could largely automate language quality assessment, speed up turnaround time, and reduce costs. The ultimate goal was to improve the overall quality of these assessments, while preserving author satisfaction.

  • Since November 2021, 10,000+ articles and 100,000+ pages have been assessed.
  • This doubled to 20,000+ articles in 2023.

The Result

The following milestones were achieved by the publisher in a very short time:

Transitioned from a journal-level to an article-level workflow.

Assigned native English-speaking copy editors to articles needing "high" level editing.

Improved turnaround time by breaking down the process at the article level.

Over $500,000 was saved annually by eliminating redundant copy editing.

Before Language Central
  • Manual assessment of manuscripts for language quality
  • Copy editing service levels defined for the entire journal
  • Fixed turnaround time and cost

Since Language Central
  • ML-based language profiler assesses language quality
  • Copy editing service levels defined per article
  • Faster turnaround time and reduced cost

Built on a convolutional neural network, Language Central leverages deep learning models and linguistically informed rule-based systems. It evaluates content at the sentence level based on sentence structure, parts-of-speech components, text sequences, spelling, and word similarity patterns, then aggregates the results at the journal-article or book-chapter level. Language Central has been 3 years in the making, built on our 25 years of copy editing experience and in partnership with Enago's Trinka, which forms part of Language Central's core.
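The sentence-to-article aggregation described above can be sketched as a weighted mean of per-sentence scores. The length-based weighting below is an assumption for illustration; the production system combines many more signals:

```python
def article_score(sentence_scores, sentence_lengths=None):
    """Aggregate per-sentence quality scores (0-1) to an article-level score.

    Uses a length-weighted mean so long sentences count more.
    This weighting is an illustrative assumption, not Language
    Central's actual aggregation method.
    """
    if not sentence_scores:
        raise ValueError("no sentences to score")
    if sentence_lengths is None:
        sentence_lengths = [1.0] * len(sentence_scores)
    total_weight = sum(sentence_lengths)
    weighted = sum(s * w for s, w in zip(sentence_scores, sentence_lengths))
    return weighted / total_weight
```

An article whose sentences score [1.0, 0.5] with equal weights would, under this sketch, receive an article-level score of 0.75.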

Shanthi Krishnamoorthy
Head - R&D, TNQ Technologies
In collaboration with Trinka

Automate Your Editorial Workflows with Trinka AI