The dream of translating with the help of a machine has persisted for 70 years, but until recently raw machine output was as unnatural as it was inaccurate. It was usually easier to retranslate the text than to try to fix what couldn't really be fixed. The most inexperienced translator could produce raw material that was more editable than the best machine translation. So machine translation was used only in desperate circumstances or in special scenarios. Whatever the developers of various technologies claimed about productivity gains from MT, in most cases there was no increase in productivity, or it was so small that it couldn't justify the cost of setting up an MT-based process and replacing good translators with more accommodating ones.

The situation improved slightly with the emergence of statistical machine translation technology. It was no longer necessary for dozens of linguists to describe grammar rules; instead, a corpus of good translations could be fed to an engine such as Moses, which would produce output of which roughly 25% was suitable for editing. The result was no better than that of a well-developed RBMT engine, but it could be obtained incomparably faster. However, the real performance gains in the production process were still small.

The emergence of neural networks, and particularly of so-called "deep learning" methods, has fundamentally changed the landscape. Well-trained neural networks produce relatively coherent text overall. Occasionally the output turns out to be so good that "smooth" sentences form the majority.

This is constantly being written and spoken about on every corner, most often by people who haven't translated professionally and therefore can't appreciate the value of such a service. One gets the impression that there's nothing left to do and that professional translators are no longer needed: almost a dying profession!

The actual picture, though, is considerably different. It isn't yet appropriate to talk about artificial intelligence in relation to machine translation engines. First, because machine translation is so far just a mechanism, and second, because this mechanism has the following problems:

1. ACCURACY (ADEQUACY) ISN'T GUARANTEED: Modern neural networks with so-called deep-learning capability handle the input stream more efficiently, and at first glance their output represents an improvement over what came before. However, the neural network doesn't "think": it can neither find an error in the source text (if there is one) nor assess the accuracy and adequacy of its own output. The more good data is fed in, the better the output will be, but the accuracy of such a "translation" still isn't assured. What you can be certain of is that an error in the source text will be reproduced in the translation without being corrected.

2. GARBAGE IN, GARBAGE OUT: A machine translation engine, of course, is unable to assess the quality of the material fed to it; if you put mediocre translations into it, the result will be mediocre too. We see this happen on many occasions.

3. NO TRAINING IN THE SUBJECT AREA: Commonly accessible machine translation engines are trained on huge amounts of publicly available data, consisting mainly of general vocabulary similar to what is found in media publications. That data pool also contains translated output from the United Nations and the European Commission, mostly on social and legal topics. High-quality open translations on other subjects are hard to find, because that data is privately held. (Certainly an engine can be trained on such data and the result will be somewhat better, but only the owner of the data can do so.)

4. INCONSISTENT RESULTS: Developers are constantly changing the learning algorithms, and these engines often break, yielding wildly inconsistent quality or even gibberish. So even if you have chosen a stock engine that suits your particular case better than the others, there's no guarantee that it won't break at some point and that publishing its unedited output won't turn into a complete mess.

5. NEURAL NETWORK TERMINOLOGY STILL ISN'T SMART ENOUGH: Even if the engine is trained on a large amount of data in your subject area with uniformly good terminology, there's no guarantee that a particular term will be translated correctly rather than descriptively, with an incorrect synonym, or even with an entirely different word. Alas, the neural network doesn't "understand" anything; it only selects the most frequently used word combinations. Terms are rare words in a text, particularly in comparison with common vocabulary, and it's these special concepts that require careful terminological work, which should be performed not so much by terminologists as by specialists in the specific subject area. This problem is well known, and over the past year or two (2018-2019) developers have implemented "workarounds" that enforce terminology from user glossaries on top of already trained, working MT engines. However, these crutches don't guarantee that a term will be translated exactly as indicated in the glossary. Therefore, a complete reconciliation of all terms in machine translation output against the glossary (even on a specially trained MT engine) can't be abandoned; a sketch of such a check appears after this list.

6. IMPOVERISHED, MORE PRIMITIVE LANGUAGE: Psychology has the notion of a "status quo" effect, i.e., the unwillingness to change the usual way of doing things and the tendency to follow rules and habits instead. Although a machine translation engine possesses no intelligence, the "status quo" effect manifests itself in its output to the greatest extent. If the training corpus contains an exact match for a new source sentence, the engine will behave like a translation memory and produce a translation on par with the previous version. If it has to translate a genuinely new sentence, the engine will choose the most frequently used wording. On the one hand, this is entirely correct, because it accurately reproduces the language usage of the subject area represented by the training corpus. On the other hand, when a machine translation engine is used, similar stock phrases accumulate, and the lexical density and lexical variety of the translation are significantly reduced overall, as research has shown (a rough way to measure this is sketched after the list).

7. BAD SOURCE TEXT: With journalistic material one can always argue that it's "just an article" and the journalist has simplified the material for the reader in some authorial way; consequently, the translation of media materials needn't be especially accurate. Content of a technical nature, however, must accurately and reliably convey the full meaning of the material. And it's precisely in technical texts that it often happens that the author of the source text didn't really understand the technical details the text describes. Or the source text was written by an author who lacked native fluency in the source language and couldn't express himself correctly. Or money was saved on producing the source text. Or the source text is itself a poor translation from another language. Or the task is to translate additional material on top of documents that were previously translated badly (the existing translations may be full of errors, with missing or conflicting terminology, including the names of important entities and documents). It might also be that the author is a specialist who omits important information that seems obvious only to him and to colleagues at his level; a machine translation engine won't produce an equally "correctly encoded" translation, and its output will simply be wrong. In all these cases (and this list is far from complete) the result is the same: material that is essentially incorrect or very poorly formulated will be just as poorly formulated in the neural network's output. Of course, it isn't generally the translator's job to improve the material being translated (although doing so helps ensure the level of quality requested by the customer). A text that isn't very clear can, of course, be translated just as incomprehensibly, but a professional translator will render it clearly by choosing slightly more precise words, terms and expressions. With machine translation, you can be sure that the translation won't be any more sensible than the bad source text.

8. IT IS IMPOSSIBLE TO ISSUE INSTRUCTIONS AND DEMAND COMPLIANCE: The previous point comes down to the simple fact that you can't issue guidelines to this "artificial intelligence", or demand that it observe a required level of quality or any other requirement, simply because it isn't artificial intelligence at all but a mechanism trained on specific data.

9. POOR ADAPTATION TO THE USAGE OF THE TARGET LANGUAGE, LOCAL TRADITIONS AND STANDARDS: The target technical language has its own traditions and differs from typical American English in several important ways. Translation of such text by a stock engine, even with accuracy and terminology checked, will give a result that doesn't fully correspond to the target-language industry and government specifications, vocabulary and style familiar to the engineer. Of course, if one "feeds" the neural network good adapted translations, the situation will improve. However, the improvement will be modest, because the "lexical distance" of a well-adapted translation from the original is much greater than that of the literal, word-for-word translation an MT engine produces.

10. LACK OF UNIFORMITY, COHERENCE OR CONSISTENCY: The data an MT engine is trained on, even if it consists entirely of good translations, was created by different people and, even within a single topic, contains language from a variety of documents, products and projects. One shouldn't be surprised that an engine trained on such a morass of styles, genres and expressive modes produces a result lacking in character, nuance, sophistication and native fluency.

11. LACK OF COHERENCE AT THE PARAGRAPH LEVEL AND IN THE ENTIRE DOCUMENT: A machine translation engine is unable to create a coherent document; it translates only individual sentences. Recently, attempts have been made to incorporate context at the level of the previous and next sentences (though not the whole paragraph), but success remains elusive.

12. STRONG INTERFERENCE: Interference, in linguistics, is the influence of the vocabulary and usage of the source language on the language of the translation. A translation shows more interference from the source text than a text written from scratch on the basis of that source, and post-edited machine translation shows even more, as recent studies have demonstrated. In other words, even heavily edited MT output remains closer to a literal transfer of the original than a full human translation.

13. NO EMPHASIS ON THE STRUCTURE OF A COHERENT TEXT: A coherent text has a structure, and its parts aren't equal. The introduction and the abstract, like the beginning and end of any document, must be expressed in concise language. Placing a few sentences in a separate paragraph strengthens the semantic emphasis and calls for different wording than in a monolithic block of text. The last sentence of a paragraph carries the so-called contextually relevant information, and in a truly coherent text this is a strong position. These and other aspects disappear completely when a machine translation engine is at work. Instead of a structured presentation, the reader receives a strictly literal "translation," which bears the same relation to polished, quality work as ground meat does to a featured entrée.
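
As noted in point 5, glossary-based workarounds don't guarantee term compliance, so MT output still has to be reconciled against the glossary afterwards. Below is a minimal sketch of what such a check could look like, in Python; the glossary entries, function name and example sentences are hypothetical and merely stand in for whatever term base and MT output a real workflow would use.

```python
# Hypothetical glossary reconciliation for MT output (see point 5).
# Flags segments where a source term occurs but its approved target
# rendering is absent, so a human can reconcile the terminology.

glossary = {
    # source term -> approved target term (toy entries)
    "circuit breaker": "Leistungsschalter",
    "busbar": "Sammelschiene",
}

def flag_glossary_misses(source: str, mt_output: str) -> list[str]:
    """Return warnings for approved terms missing from the MT output."""
    warnings = []
    src, tgt = source.lower(), mt_output.lower()
    for term, approved in glossary.items():
        if term in src and approved.lower() not in tgt:
            warnings.append(
                f"'{term}' is in the source, but the approved rendering "
                f"'{approved}' is not in the MT output"
            )
    return warnings

# Example: the engine chose a synonym instead of the approved term,
# so the segment is flagged for manual terminology reconciliation.
print(flag_glossary_misses(
    "Open the circuit breaker before working on the busbar.",
    "Öffnen Sie den Schutzschalter, bevor Sie an der Sammelschiene arbeiten.",
))
```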
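
Point 6 refers to research showing that lexical density and variety drop when MT is used. One simple proxy for lexical variety is the type-token ratio; the rough Python sketch below shows how it could be computed, with two toy sentences standing in for a varied human translation and a repetitive MT-based one (the sentences and the naive tokenization are illustrative assumptions, not data from any study).

```python
# Rough type-token ratio (TTR) as one proxy for lexical variety (see point 6).
# A lower ratio means the same words are repeated more often, i.e. a poorer
# vocabulary.

import re

def type_token_ratio(text: str) -> float:
    """Number of distinct words divided by the total number of words."""
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Toy comparison: varied wording vs. a version that repeats the same phrase.
varied = "Tighten the bolt, then secure the clamp and torque the fastener."
repetitive = "Tighten the bolt, then tighten the clamp and tighten the bolt."
print(f"varied:     {type_token_ratio(varied):.2f}")
print(f"repetitive: {type_token_ratio(repetitive):.2f}")
```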

Consequently, the machine translation "hype," with its claims that "human parity of MT output" has been achieved, doesn't correspond to reality: to obtain a final product with guaranteed completeness and accuracy, significant human effort must still be applied to ensure linguistic quality, consistency and appropriate vocabulary.

These efforts aren't as fruitful as translating from scratch, and they clearly fail to meet the high expectations of contemporary customers.

One can understand the perspective of the customer, who was told about "human parity" but heard no mention that the term was purely speculative and therefore had a nebulous context and meaning. Nothing was said about the fundamental difficulties, which still require the skills of qualified professionals.

Thus, how much machine translation actually increases the productivity of a professional translation service provider is a topic that deserves separate consideration. We'll explore the issue further in our responses to other questions.

12 December 2020
