
Our client opted for an AI translation workflow
As usual, a project starts with negotiations, including an estimate for the whole service, rates and required resources. A 60,000-word project in 4 batches takes a team of 2 specialists a full 24 days (minus weekends) for translation and another 15 days for proofreading. That’s almost two months of work combined, though with a shared translation memory we could have cut it in half by working in harmony, as we do. We acted as the vendor agency in this project; our client, who offers an AI-related solution, suggested using it, and their end client agreed. I think the choice was obvious, from their perspective.
We started on batch one; the four batches were each approximately the same size. On average, 12 % of each batch’s segments required editing – in a project of roughly 10,000 segments, that meant making up to 1,200 necessary changes to the text. After each batch, the prompt was revised to address these issues.
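As a rough sanity check on those figures (a sketch only – the segment count and edit rate come from the text above, while the even per-batch split is an assumption for illustration):

```python
# Back-of-the-envelope check of the edit volume described above.
TOTAL_SEGMENTS = 10_000   # approximate project size, from the text
EDIT_RATE = 0.12          # average share of segments edited, from the text
BATCHES = 4               # number of delivery batches

edited_total = int(TOTAL_SEGMENTS * EDIT_RATE)  # total segments edited
edited_per_batch = edited_total // BATCHES      # assuming equal batches

print(edited_total)      # 1200
print(edited_per_batch)  # 300
```

So a 12 % error rate on a project of this size translates into roughly three hundred corrections per batch – enough to make each delivery a substantial proofreading task in its own right.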
An error rate of 12 %
Now, an error rate of 12 % would have resulted in a complete rejection if this were a human proofreading project. Key-term and grammatical issues would have convinced us to return the project as it was and ask to replace the translator, or to do it ourselves. That would also have easily added two weeks to the delivery time.
Regrettably, even when the issues were addressed, the AI had trouble implementing the fixes without introducing new problems. While most problems were consistent within a batch, each of the four batches required a fresh study of the pattern the AI had used to introduce new mistakes. In practice, this meant we adapted to four different registers and had to unlearn what we had learned previously. For example, a suggestion to spell out one abbreviation could result in all abbreviations disappearing, the wrong ones being expanded, or entirely new words being invented for acronyms. And there’s a catch: we could always have returned the file and asked for a prompt update, but the people in charge were not proficient in our target language and relied on our general feedback. So the risk of introducing new problems outweighed the time we spent learning each register.
And then there’s fatigue
AI translation is optimized for speed and lightness, which causes it to skip rules once batches grow beyond a certain size. It is as if the model got tired and stopped caring about grammar, terminology or syntax towards the end. This is a common issue: the larger the input, the more the AI fatigues and skips the rules.
The key takeaways
What became crucial to the end client was the difference between 1.5 months and 3 weeks.
In this case, the AI was properly utilized as a linguistic tool, and while there was little transparency into the prompts – and the clear issue of non-native speakers dictating prompts that control Finnish output – I feel we had a chance to influence the intermediary results too. To an extent.
And more importantly, we had a say in the choice of tool and could communicate throughout to ensure the best quality. A word of warning is needed, however. This was a highly regulated project in a professional environment, and a text mass edited by the end client with DeepL or Google Translate will not meet the required quality standards.
But when done properly, translators may in fact be able to take on more projects than they could in the past.
While the risk is higher, exploring translation services that use artificial intelligence might save some time. It is necessary, however, to emphasize the importance of getting help from a specialist agency.
If you want to learn more or hear about any possibilities we can offer to make your localization quicker, reach out to us here.





