The march of Machine Translation into academia

Cognitive and psycholinguistic approaches pioneered the modern scientific study of translation and interpreting in the West. The first empirical research worthy of the name was Jesús Sanz’s (1930) interview study of conference interpreters. At a time when conference interpreting was becoming recognized as a profession, Sanz focused on foreseeable aspects like their training and their working methods and conditions, but also on cognitive skills such as intuition and memory.

The increasingly tense pre-war atmosphere, culminating in the Spanish Civil War (1936–1939) and WWII (1939–1945), brought most research initiatives unrelated to warfare to a halt. In contrast, the post-war years saw two major developments that would foster the birth of our academic discipline: a strategic interest from governments in machine translation and the so-called cognitive revolution.

Warren Weaver

[…] one naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography. When I look at an article in Russian, I say “This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.”

Have you ever thought about this? As a linguist and expert on computers, do you think it is worth thinking about?

Letter from Warren Weaver to Prof. Norbert Wiener in 1947, as quoted in Weaver’s 1949 Memorandum

Mechanical dictionaries based on universal numerical codes were suggested by Descartes and Leibniz in the 17th century, and tried by people like Cave Beck. The modern history of Machine Translation (MT), however, is usually associated with the development of computers, starting in the mid-20th century.

In WWII, information theory was applied to deciphering enemy messages, and cryptography experienced a great leap forward thanks to the efforts of researchers such as Alan Turing. After their victory, the USA and the Soviet Union became competitors again, and the Cold War was also staged in science. In the USA, the defense, intelligence, and scientific communities were very interested in reading everything published in Russian; conversely, most Soviet efforts concentrated on English-to-Russian translation. Given the huge number of documents to be translated, these goals were nearly impossible to reach through human effort alone, and both countries were prepared to fund research to make MT possible.

In 1949, Warren Weaver published a memorandum where he sketched what would become the basic assumptions behind the first steps of MT: that languages had universal features and a common, underlying logic, and that cryptography and statistical techniques could be applied to translating. Soon research teams began their work at UCLA, the University of Washington, and MIT, where in 1951 Yehoshua Bar-Hillel became the first full-time MT researcher and the head of a team that included a young Noam Chomsky. In 1954, the first Russian–English MT system—a joint effort of IBM and Georgetown University—was demonstrated to the public.

It would soon become obvious that translating was not such a straightforward task as had been surmised, and MT research divided into two main orientations: (a) statistical approaches that tried to apprehend lexical and syntactic regularities through trial and error; and (b) theoretical approaches focusing on basic linguistic research. In 1966, the ALPAC report argued that MT had not lived up to its hype—it was less accurate, slower [probably due to post-editing needs], and twice as expensive as human translation—and concluded that there was no immediate or predictable prospect of useful MT, and that more basic research was necessary.

The ALPAC report also stated: “In order to have an appreciation either of the underlying nature and difficulties of translation or of the present resources and problems of translation, it is necessary to know something about human translation and human translators.” The ALPAC Committee recommended that funding should concentrate on computational linguistics and on learning about areas such as the overall translation process, improving human translation practice, finding ways to speed it up, and improving quality evaluation. Let us now backtrack a little to see the other source of inspiration for the birth of cognitive and psycholinguistic approaches to translation and interpreting: the cognitive revolution.
