Republic of Azerbaijan, "ÇAĞ" Öyrətim İşlətmələri (Educational Enterprises)


«TƏRCÜMƏŞÜNASLIQ VƏ ONUN MÜASİR DÖVRDƏ ROLU» ("Translation Studies and Its Role in the Modern Era"), IV Republican Student Scientific-Practical Conference



the cultures. It also brings into focus the important question of cultural identity. Else
Ribeiro Pires Vieira (1999:42) remarks that it is ultimately impossible to translate
one cultural identity into another. So the act of translation is intimately related to the
question of cultural identity, difference and similarity.
A rather interesting approach to literary translation comes from Michael Riffaterre
(1992: 204-217). He separates literary and non-literary use of language by saying
that literature is different because i) it semioticizes the discursive features, e.g. lexical
selection is made morphophonemically as well as semantically, ii) it substitutes semiosis
for mimesis, which gives literary language its indirection, and iii) it has "the textuality
that integrates semantic components of the verbal sequence (the ones open to linear
decoding) - a theoretically open-ended sequence - into one closed, finite semiotic
system", that is, the parts of a literary text are vitally linked to the whole of the text
and the text is more or less self-contained. Hence the literary translation should
"reflect or imitate these differences". He considers a literary text an artefact that
contains the signals which mark it as an artefact. Translation should also imitate
or reflect these markers. He goes on to say that as we perceive a certain text as literary
based on certain presuppositions, we should render these literariness-inducing
presuppositions. Though this seems rather like a traditional and formalist approach, what
should be noted here is that Riffaterre perceives literariness in a rather different
way while considering the problems of literary translation: `literariness' is in no way
the `essence' of a text, and a literary text is, for Riffaterre, one that contains the
signs which make it obvious that it is a cultural artefact. Although he conceives of the
literary text as a self-contained system, Riffaterre too, like many other contemporary
theorists, sees it as a sub-system of the cultural semiotic system. However, if one is
to consider Riffaterre's notion of `text' in contrast to Kristeva's notion of intertextuality,
one feels that Riffaterre is probably simplifying the problem of cultural barriers to
translatability.
The assumption that a literary text is a cultural artefact related to other social
systems is widespread these days. Some of the most important theorization
based on this assumption has come from the provocative and insightful perspectives of
theorists like Andre Lefevere, Gideon Toury, Itamar Even-Zohar, and Theo Hermans.
These theorists are indebted to the concept of `literature as system' as propounded
by Russian Formalists like Tynianov and Jakobson, Czech Structuralists like
Mukarovsky and Vodicka, the French Structuralist thinkers, and the Marxist thinkers
who considered literature a section of the `superstructure'. The central idea of this
point of view is that the study of literary translation should begin with a study of the
translated text rather than with the process of translation; with its role, function and
reception in the culture into which it is translated; and with the role of culture in influencing
the `process of decision making that is translation.' It is fundamentally descriptive in
its orientation (Toury 1985).
Lefevere maintains, `Literature is one of the systems which constitute the system
of discourses (which also contains disciplines like physics or law) usually referred
to as a civilization, or a society' (1988:16). Literature for Lefevere is a subsystem of
society and it interacts with other systems. He observes that there is a `control factor 
in the literary system which sees to it that this particular system does not fall too far 
out of step with other systems that make up a society' (p.17). He astutely observes that
this control function works from outside of this system as well as from inside. The 
control function within the system is that of dominant poetics, `which can be said to 
consist of two components: one is an inventory of literary devices, genres, motifs, 
prototypical characters and situations, symbols; the other a concept of what the role 
of literature is, or should be, in the society at large.' (p.23). The educational estab-
lishment dispenses it. The second controlling factor is that of `patronage'. It can be 
exerted by `persons, not necessarily the Medici, Maecenas or Louis XIV only, groups
of persons, such as a religious grouping or a political party, a royal court, publishers,
whether they have a virtual monopoly on the book trade or not and, last but not least,
the media.' Patronage consists of three elements: the ideological component, the
financial or economic component, and the element of status (p.18-19). The system of 
literature, observes Lefevere, is not deterministic but it acts as a series of `constraints' 
on the reader, writer, or rewriter. The control mechanism within the literary system is 
represented by critics, reviewers, teachers of literature, translators and other rewriters 
who will adapt works of literature until they can be claimed to correspond to the 
poetics and the ideology of their time. It is important to note that the political and 
social aspect of literature is emphasised in the system approach. The cultural politics 
and economics of patronage and publicity are seen as inseparable from literature. 
`Rewriting' is the key word here, used by Lefevere as a `convenient umbrella-term'
to refer to most of the activities traditionally connected with literary studies:
criticism, as well as translation, anthologization, the writing of literary history and
the editing of texts - in fact, all those aspects of literary studies which establish and
validate the value-structures of canons. `Rewritings, in the widest sense of the term,
adapt works of literature to a given audience and/or influence the ways in which
readers read a work of literature' (60-61). The texts which are rewritten, processed
for a certain audience, or adapted to a certain poetics, are the `refracted' texts and 
these, maintains Lefevere, are responsible for the canonized status of the text (p.179).
`Interpretation (criticism), then, and translation are probably the most important forms
of refracted literature, in that they are the most influential ones', he notes (1984:90)
and says, 
`One never translates, as the models of the translation process based on the 
Buhler/Jakobson communication model, featuring disembodied senders and receivers, 
carefully isolated from all outside interference by that most effective expedient, the 
dotted line, would have us believe, under a sort of purely linguistic bell jar. Ideological 
and poetological motivations are always present in the production, or the non-production,
of translations of literary works... Translation and other refractions, then, play
a vital part in the evolution of literatures, not only by introducing new texts, authors 
and devices, but also by introducing them in a certain way, as part of a wider design 
to try to influence that evolution' (97) . 

Translation thus becomes part of the `refraction': "... the rather long term
strategy, of which translation is only a part, and which has as its aim the manipulation
of foreign work in the service of certain aims that are felt worthy of pursuit in the
native culture..." (1988:204). This is indeed a powerful theory for studying translation,
as it attaches as much significance to translation as to criticism and interpretation.
Lefevere goes on to give some impressive analytical tools and perspectives for
studying literary translation.
`The ideological and poetological constraints under which translations are pro-
duced should be explicated, and the strategy devised by the translator to deal with 
those constraints should be described: does he or she make a translation in a more 
descriptive or in a more refractive way? What are the intentions with which he or 
she introduces foreign elements into the native system? Equivalence, fidelity, freedom 
and the like will then be seen more as functions of a strategy adopted under certain 
constraints, rather than absolute requirements, or norms that should or should not be 
imposed or respected. It will be seen that `great' ages of translation occur whenever
a given literature recognizes another as more prestigious and tries to emulate it.
Literatures will be seen to have less need of translation(s) when they are convinced of
their own superiority. It will also be seen that translations are often used (think of the 
Imagists) by adherents of an alternative poetics to challenge the dominant poetics of a 
certain period in a certain system, especially when that alternative poetics cannot use 
the work of its own adherents to do so, because that work is not yet written' (1984:98-99). 
Another major theorist working along lines similar to Lefevere's is Gideon
Toury (1985). His approach is what he calls Descriptive Translation Studies (DTS).
He emphasizes the fact that translations are facts of one system only: the target system.
It is the target or recipient culture, or a certain section of it, which serves as the
initiator of the decision to translate, and consequently translators operate first and
foremost in the interest of the culture into which they are translating. Toury very
systematically charts out a step-by-step guide to the study of translation. He stresses
that the study should begin with the empirically observed data, that is, the translated
texts, and proceed from there towards the reconstruction of non-observational facts,
rather than the other way round as is usually done in the `corpus'-based and traditional
approaches to translation. The most interesting thing about Toury's approach (1984)
is that it takes into consideration things like `pseudo-translation', that is, texts foisted
off as translations which are in fact not. In the very beginning, when the problem of
distinguishing a translated text from a non-translated text arises, Toury assumes 
that for his procedure `translation' will be taken to be `any target-language utterance 
which is presented or regarded as such within the target culture, on whatever grounds'. 
In this approach pseudotranslations are `just as legitimate objects for study within 
DTS as genuine translations. They may prove to be highly instructive for the estab-
lishment of the general notion of translation as shared by the members of a certain 
target language community'. 
 

HISTORY OF MACHINE TRANSLATION 
Kənan NURİ 
Translation 3 
 
The history of machine translation generally starts in the 1950s, although work 
can be found from earlier periods. The Georgetown experiment in 1954 involved 
fully automatic translation of more than sixty Russian sentences into English. The 
experiment was a great success and ushered in an era of significant funding for 
machine translation research in the United States. The authors claimed that within 
three or five years, machine translation would be a solved problem. In the Soviet 
Union, similar experiments were performed shortly after. 
However, real progress was much slower; after the ALPAC report in 1966,
which found that ten years of research had failed to fulfill expectations,
funding was dramatically reduced. Starting in the late 1980s, as computational power
increased and became less expensive, more interest began to be shown in statistical 
models for machine translation. 
Today there is still no system that provides the holy grail of "fully automatic high
quality translation of unrestricted text" (FAHQUT). However, there are many prog-
rams now available that are capable of providing useful output within strict constraints; 
several of them are available online, such as Google Translate and the SYSTRAN 
system which powers AltaVista's (Yahoo's since May 9, 2008) BabelFish. 
THE BEGINNING 
The history of machine translation dates back to the seventeenth century, when 
philosophers such as Leibniz and Descartes put forward proposals for codes which 
would relate words between languages. All of these proposals remained theoretical, 
and none resulted in the development of an actual machine. 
The first patents for "translating machines" were applied for in the mid 1930s. 
One proposal, by Georges Artsrouni, was simply an automatic bilingual dictionary
using paper tape. The other proposal, by Peter Troyanskii, a Russian, was more detailed. 
It included both the bilingual dictionary, and a method for dealing with grammati-
cal roles between languages, based on Esperanto. The system was split up into three 
stages: the first was for a native-speaking editor in the source language to organize
the words into their logical forms and syntactic functions; the second was for the 
machine to "translate" these forms into the target language; and the third was for a 
native-speaking editor in the target language to normalize this output. His scheme 
remained unknown until the late 1950s, by which time computers were well-known. 
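The three-stage division can be pictured with a small Python sketch. This is a hypothetical illustration of the scheme as described above, not a reconstruction of Troyanskii's mechanism; the function bodies and the two-word dictionary are invented.

```python
# Hypothetical sketch of the three-stage scheme described above.
# The processing in each stage is invented for illustration only.

def stage1_source_editor(sentence: str) -> list:
    # A human editor reduces source words to base "logical forms"
    # (here, crudely, just lowercased tokens).
    return sentence.lower().split()

def stage2_machine(forms: list, dictionary: dict) -> list:
    # The machine substitutes each base form with its target-language equivalent.
    return [dictionary.get(form, form) for form in forms]

def stage3_target_editor(words: list) -> str:
    # A human editor in the target language normalizes the raw output.
    return " ".join(words).capitalize() + "."

toy_dictionary = {"the": "la", "house": "casa"}
print(stage3_target_editor(stage2_machine(stage1_source_editor("The house"), toy_dictionary)))
# prints: La casa.
```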
THE EARLY YEARS 
The first proposals for machine translation using computers were put forward 
by Warren Weaver, a researcher at the Rockefeller Foundation, in his July, 1949
memorandum. These proposals were based on information theory, successes of code 
breaking during the Second World War and speculation about universal underlying 
principles of natural language. 
A few years after these proposals, research began in earnest at many universities 
in the United States. On 7 January 1954, the Georgetown-IBM experiment, the first 
public demonstration of an MT system, was held in New York at the head office of 
IBM. The demonstration was widely reported in the newspapers and received much 
public interest. The system itself, however, was no more than what today would be 
called a "toy" system, having just 250 words and translating just 49 carefully selected 
Russian sentences into English — mainly in the field of chemistry. Nevertheless it 
encouraged the view that machine translation was imminent — and in particular 
stimulated the financing of the research, not just in the US but worldwide. 
Early systems used large bilingual dictionaries and hand-coded rules for fixing 
the word order in the final output. This was eventually found to be too restrictive, and 
developments in linguistics at the time, for example generative linguistics and
transformational grammar, were proposed to improve the quality of translations.
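The flavour of these early systems can be conveyed with a short Python sketch. It is not a reconstruction of any historical system: the five-entry dictionary, the word classes and the single adjective-noun reordering rule are all invented for illustration, but the two steps (word-for-word lookup, then a hand-coded fix-up of word order) mirror the architecture described above.

```python
# Illustrative only: bilingual dictionary lookup followed by one
# hand-coded rule that fixes word order in the output.

DICTIONARY = {"the": "la", "red": "roja", "house": "casa", "is": "es", "big": "grande"}
ADJECTIVES = {"roja", "grande"}
NOUNS = {"casa"}

def translate(sentence: str) -> str:
    # Step 1: word-for-word dictionary lookup.
    words = [DICTIONARY.get(w, w) for w in sentence.lower().split()]
    # Step 2: hand-coded reordering (adjective + noun -> noun + adjective).
    for i in range(len(words) - 1):
        if words[i] in ADJECTIVES and words[i + 1] in NOUNS:
            words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

print(translate("The red house is big"))  # prints: la casa roja es grande
```

Every new sentence pattern needs another such rule, which is exactly why the approach was eventually found too restrictive.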
During this time, operational systems were installed. The United States Air Force 
used a system produced by IBM and Washington University, while the Atomic 
Energy Commission in the United States and EURATOM in Italy used a system 
developed at Georgetown University. While the quality of the output was poor, it 
nevertheless met many of the customers' needs, chiefly in terms of speed. 
At the end of the 1950s, an argument was put forward by Yehoshua Bar-Hillel,
a researcher asked by the US government to look into machine translation, against
the possibility of "Fully Automatic High Quality Translation" by machines. The
argument is one of semantic ambiguity or double meaning. Consider the following sentence:
Little John was looking for his toy box. Finally he found it. The box was in the pen. 
The word pen may have two meanings: the first, something you use to write with;
the second, an enclosure of some kind. To a human, the meaning
is obvious, but Bar-Hillel claimed that without a "universal encyclopedia" a machine would
never be able to deal with this problem. Today, this type of semantic ambiguity can 
be solved by writing source texts for machine translation in a controlled language 
that uses a vocabulary in which each word has exactly one meaning. 
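The contrast can be made concrete with a minimal Python sketch. The sense labels and the tiny lexicons are invented for illustration and do not reflect any real MT engine: an ordinary lexicon maps "pen" to two senses, so the machine must guess, whereas a controlled-language lexicon admits only words with exactly one sense, so no disambiguation is needed.

```python
# Illustrative only: ambiguous versus controlled vocabularies.

AMBIGUOUS_LEXICON = {
    "pen": ["writing instrument", "enclosure (e.g. a playpen)"],
    "box": ["container"],
}

CONTROLLED_LEXICON = {
    "ballpoint": ["writing instrument"],
    "playpen": ["enclosure for a child"],
    "box": ["container"],
}

def senses(word, lexicon):
    return lexicon.get(word, [])

print(senses("pen", AMBIGUOUS_LEXICON))       # two candidates: the machine must choose
print(senses("playpen", CONTROLLED_LEXICON))  # exactly one: no disambiguation needed
```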
THE 1960s, THE ALPAC REPORT AND THE SEVENTIES 
Research in the 1960s in both the Soviet Union and the United States concen-
trated mainly on the Russian-English language pair. Chiefly the objects of translation 
were scientific and technical documents, such as articles from scientific journals. The 
rough translations produced were sufficient to get a basic understanding of the articles. 
If an article discussed a subject deemed to be of security interest, it was sent to a 
human translator for a complete translation; if not, it was discarded. 
A great blow came to machine translation research in 1966 with the publication 
of the ALPAC report. The report was commissioned by the US government and
performed by ALPAC, the Automatic Language Processing Advisory Committee, a 
group of seven scientists convened by the US government in 1964. The US government
was concerned that there was a lack of progress being made despite significant
expenditure. It concluded that machine translation was more expensive, less accurate 
and slower than human translation, and that despite the expenses, machine translation 
was not likely to reach the quality of a human translator in the near future. 
The report, however, recommended that tools be developed to aid translators -  
automatic dictionaries, for example - and that some research in computational lin-
guistics should continue to be supported. 
The publication of the report had a profound impact on research into machine 
translation in the United States, and to a lesser extent the Soviet Union and United 
Kingdom. Research, at least in the US, was almost completely abandoned for over 
a decade. In Canada, France and Germany, however, research continued. In the US 
the main exceptions were the founders of Systran (Peter Toma) and Logos (Bernard 
Scott), who established their companies in 1968 and 1970 respectively and served 
the US Dept of Defense. In 1970, the Systran system was installed for the United 
States Air Force and subsequently in 1976 by the Commission of the European Com-
munities. The METEO System, developed at the Université de Montréal, was installed 
in Canada in 1977 to translate weather forecasts from English to French, and was 
translating close to 80,000 words per day, or 30 million words per year, until it was
replaced by a competitor's system on 30 September 2001.
While research in the 1960s concentrated on limited language pairs and input, 
demand in the 1970s was for low-cost systems that could translate a range of technical 
and commercial documents. This demand was spurred by the increase of globalization 
and the demand for translation in Canada, Europe, and Japan. 
THE 1980S AND EARLY 1990s 
By the 1980s, both the diversity and the number of installed systems for machine 
translation had increased. A number of systems relying on mainframe technology 
were in use, such as Systran, Logos, and Metal. 
As a result of the improved availability of microcomputers, there was a market 
for lower-end machine translation systems. Many companies took advantage of this 
in Europe, Japan, and the USA. Systems were also brought onto the market in China, 
Eastern Europe, Korea, and the Soviet Union. 
During the 1980s there was a great deal of MT activity, especially in Japan. With the
Fifth Generation computer project, Japan intended to leap over its competition in computer
hardware and software, and one project that many large Japanese electronics firms
found themselves involved in was creating software for translating to and from
English (Fujitsu, Toshiba, NTT, Brother, Catena, Matsushita, Mitsubishi, Sharp, Sanyo,
Hitachi, NEC, Panasonic, Kodensha, Nova, and Oki).
Research during the 1980s typically relied on translation through some variety 
of intermediary linguistic representation involving morphological, syntactic, and 
semantic analysis. 

At the end of the 1980s there was a large surge in novel methods for machine
translation. One system developed at IBM was based on statistical methods.
Makoto Nagao and his group used methods based on large numbers of
example translations, a technique which is now termed example-based machine trans-
lation. A defining feature of both of these approaches was the lack of syntactic and 
semantic rules and reliance instead on the manipulation of large text corpora. 
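The corpus-driven idea behind these approaches can be suggested with a deliberately crude Python sketch: translation preferences are estimated by counting co-occurrences in a tiny, invented "parallel corpus" rather than written as hand-coded rules. Real statistical systems (such as the IBM work mentioned above) use proper alignment and probability models and vastly larger corpora; the sketch only shows rules being replaced by counts.

```python
# Crude illustration: estimate likely translations from co-occurrence counts
# in a three-sentence invented parallel corpus.

from collections import Counter, defaultdict

parallel_corpus = [
    ("the house", "das haus"),
    ("the book", "das buch"),
    ("a house", "ein haus"),
]

cooccurrence = defaultdict(Counter)
for source, target in parallel_corpus:
    for s in source.split():
        for t in target.split():
            cooccurrence[s][t] += 1

def translation_candidates(word):
    # Normalise raw counts into a rough distribution over target words.
    counts = cooccurrence[word]
    total = sum(counts.values())
    return {t: round(c / total, 2) for t, c in counts.items()}

print(translation_candidates("house"))  # "haus" gets the highest score (0.5)
```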
During the 1990s, encouraged by successes in speech recognition and speech 
synthesis, research began into speech translation with the development of the German 
Verbmobil project. 
There was significant growth in the use of machine translation as a result of the 
advent of low-cost and more powerful computers. It was in the early 1990s that 
machine translation began to make the transition away from large mainframe com-
puters toward personal computers and workstations. Two companies that led the PC
market for a time were Globalink and MicroTac; a merger of the two companies
(in December 1994) was subsequently found to be in the corporate interest of both.
Intergraph and Systran also began to offer PC versions around this time. Sites also 
became available on the internet, such as AltaVista's Babel Fish (using Systran techno-
logy) and Google Language Tools (also initially using Systran technology exclusively). 
RECENT RESEARCH 
The field of machine translation has in the last few years seen major changes. 
Currently a large amount of research is being done into statistical machine translation 
and example-based machine translation. In the area of speech translation, research 
has focused on moving from domain-limited systems to domain-unlimited translation 
systems. In different research projects in Europe and in the United States (such as
STR-DUST), solutions for automatically translating parliamentary speeches and broadcast
news have been developed. In these scenarios the domain of the content is no
longer limited to any special area, but rather the speeches to be translated cover a 
variety of topics. More recently, the French-German project Quaero has investigated
the possibility of using machine translation for a multilingual internet. The
project seeks to translate not only webpages, but also videos and audio files found 
on the internet. 
Today, only a few companies use statistical machine translation commercially, 
e.g. SDL International / Language Weaver (sells translation products and services), 
Google (uses their proprietary statistical MT system for some language combinations 
in Google's language tools), Microsoft (uses their proprietary statistical MT system 
to translate knowledge base articles), and Ta with you (offers a domain-adapted 
machine translation solution based on statistical MT with some linguistic knowledge). 
There has been a renewed interest in hybridizations, with researchers combining 
syntactic and morphological (i.e., linguistic) knowledge into statistical systems, as 
well as combining statistics with existing rule-based systems. 
 
