1.3 Theoretical aspects of simultaneous interpretation methodology

The study of simultaneous interpretation (SI) has generated speculative and empirical literature almost since SI emerged as a viable activity in the middle of the last century. Until then, translation was a staid, deferred activity, the object of musings about the nature of equivalence and the differences between languages. A few professional psychologists have approached the new immediate, situated form of translation with the curiosity which drew their founding fathers to feats of memory and intelligence; cognitive psychological approaches to interpretation are accordingly long on memory and attention modeling, but short on the contents of memory and the focus of attention, and lack articulation with features of the discourse. Practitioners, meanwhile, have explored their new feat with undisguised pride and tried to distil its essential principles to guide training. The main aims of conference interpretation research have thus emerged as follows:
1. pedagogical: to determine what the activity requires in terms of memory, attention and linguistic proficiency;
2. quality assessment: the search for a reliable metric of quality;
3. fundamental research: to use interpreting as a laboratory to learn more about human language and cognition generally.
Conventional scientific procedure usually follows the sequence: (1) observation and data collection — (2) pattern recognition — (3) hypothesis formation — (4) experimental testing under controlled conditions, where possible — (5) drawing conclusions from the results. Everyone nowadays recognises that none of these steps is neutral: (1) and (2) are selective and directed by preference and habit, and rare are the free spirits in whom (3) is not directed by existing models, which then inevitably influence the conditions chosen for (4) and the assumptions which continue to pervade (5).
However, as a first step towards understanding interpreting processes, or factors in quality, or establishing a theoretical basis for training, it seems reasonable to begin by observing and comparing original discourse and its interpreted versions. Instead of imposing models of memory and attention on the process a priori, we now have accounts of language in communication and models of real-time speech processing which can be applied directly to the data, but have so far been under-utilised in interpreting research. We propose a procedure for discovering processes in simultaneous interpretation through detailed corpus analysis (Setton 1999), followed by a brief discussion of the fundamental insights it offers, and the prospects for fuller formalisation and more ambitious applications such as the evaluation of translations.
The methodology proposed applies a composite of tools of linguistic analysis at different levels, with the ultimate aim of generating viable representations of the meanings available from the input and output speech streams. Transcription conventions, illustrated schematically after the list below, are proposed to:
1. represent the key meaning-indicating features of discourse — phonological, syntactic, semantic and pragmatic;
2. capture the temporal dimension of an SI corpus;
3. model the contexts constructed by participants (interpreter and listeners), without which no projection of available and salient inferences — and hence no evaluation of fidelity — is possible.
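As a purely illustrative sketch (not an established standard), the conventions above might be rendered machine-readable as a time-aligned segment record; all field names (Segment, onset_ms, context_refs, etc.) are assumptions invented for this example.

    # Illustrative sketch: one time-aligned, annotated segment of an SI corpus.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Segment:
        speaker: str                      # "SOURCE" or "INTERPRETER"
        onset_ms: int                     # onset relative to recording start
        offset_ms: int                    # offset; captures the temporal dimension
        text: str                         # orthographic transcript
        prosody: Optional[str] = None     # e.g. "falling", "pause:600ms"
        syntax: Optional[str] = None      # labelled-bracket string, if parsed
        speech_act: Optional[str] = None  # pragmatic annotation, if assigned
        context_refs: List[str] = field(default_factory=list)  # referents assumed accessible

    # Source and interpreted segments align by overlapping time spans:
    src = Segment("SOURCE", 1200, 4800, "wir haben heute beschlossen ...",
                  prosody="falling", speech_act="assertion")
    tgt = Segment("INTERPRETER", 2100, 5600, "we have decided today ...",
                  context_refs=["decision", "today"])
    lag_ms = tgt.onset_ms - src.onset_ms  # ear-voice span at this point

Pairing each record with its time span lets fidelity judgements refer not just to what was said, but to what was available to the interpreter at the moment of production.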
Modeling context is a major challenge for translation research. Most translations certified as good and accurate by a panel of experts defy explanation in terms of pure decoding and encoding: the conventional wisdom recognises that a translator must draw on an external knowledge base, but assumes that this base is individual and unconstrained, and hence impossible to model. This would place translation definitively beyond the reach of scientific analysis, along with other ‘open-ended’ cognitive processes (Fodor 1983). The only hope of connecting our field of research with cognitive science, and through it to eventual explanatory articulation with, for example, neurological description, lies in constraining the knowledge base that we allow as input to our model of interpreting.
Linguistics offers models of speech at various levels for analysing a stream of sound into morpho-phonological units, sentence structure, logical form, intonation units and (if we accept these theories) speech acts or functional units of discourse. Both fundamental research into interpreting processes and applications like quality evaluation will probably need elements from all these levels. Among the most difficult components to capture are the lexical-semantic (the contribution of individual word meanings) and pragmatic dimensions. In both these areas at least, however, theoretical approaches and even the beginnings of formalisation have emerged, which is not yet the case for subtler levels such as affective and non-verbally conveyed meaning.
Morpho-phonological analysis of the speech stream is given largely by the words of the transcript; for less common languages, significant features can be further explained in footnotes. Syntacticians use ‘labelled brackets’ to represent sentence structure. Although this notation was designed for research into the rules behind linguistic forms, syntactic structure is obviously a major contributor to any model of meaning available to hearers from different speech streams, and must therefore be shown.
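As a hedged illustration of how labelled bracketing becomes computable, the sketch below parses a bracketed toy sentence with NLTK's Tree class (assuming the nltk package is available); the sentence and labels are invented for the example.

    # Sketch: labelled brackets as a machine-readable parse (requires nltk).
    from nltk.tree import Tree

    labelled = "[S [NP the interpreter] [VP split [NP the sentence] [PP at [NP a conjunction]]]]"

    tree = Tree.fromstring(labelled, brackets="[]")
    print(tree.label())   # S
    print(len(tree))      # 2 immediate constituents (NP, VP)
    for np in tree.subtrees(lambda t: t.label() == "NP"):
        print(" ".join(np.leaves()))  # each noun phrase in turn

Once the bracketing is parsed, constituent counts and embedding depths in source and target speech streams can be compared mechanically.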
That such a ‘parsed tree’ might have psychological reality as a distinct stage of processing, however, was shown to be implausible early in speech-processing research. Contemporary models are based on ‘lexically driven’ comprehension, in which each incoming word constrains meaning via its semantic frame. But each content word, at least, must simultaneously also evoke complex concepts and associations. The lexical-semantic component of our model should account for the semantic and conceptual material activated by each recognised incoming word, bearing in mind, first, that words evoke a spectrum of potential meanings, some of which are selected more strongly to make sense with their neighbours, in a rapid choice of the best possible fit; the source conceptual material is assumed to be stored in long-term memory in associative structures known as frames, schemas or scripts, which facilitate retrieval en bloc (‘bootstrapping’) into working memory. Secondly, modern prototype and mental space theories suggest that words activate not so much points or nodes in semantic space (as in a network) as vectors, while resolutely inferential accounts see words more as ‘pointers’ which may indeed be differently situated but point to the same meaning (Origgi & Sperber 2000). Both these combinatory and inferential aspects make meaning from words very difficult to model.
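Purely as a toy sketch of the ‘best possible fit’ selection just described (not Setton's model), the following code scores the senses of an ambiguous word against its neighbouring content words by feature overlap; the mini-lexicon is invented.

    # Toy sketch of lexically driven best-fit sense selection.
    # The 'frames' (feature sets) below are invented for illustration.
    FRAMES = {
        "bank": {
            "financial-institution": {"money", "account", "loan", "deposit"},
            "river-edge": {"river", "water", "shore", "fishing"},
        },
    }

    def best_fit(word: str, neighbours: set) -> str:
        """Pick the sense whose frame overlaps most with neighbouring words."""
        senses = FRAMES[word]
        return max(senses, key=lambda s: len(senses[s] & neighbours))

    print(best_fit("bank", {"river", "fishing"}))  # river-edge
    print(best_fit("bank", {"loan", "account"}))   # financial-institution

Real lexical activation is of course graded and parallel rather than a single arg-max, which is one reason the combinatory aspect resists clean modelling.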
On the syntax-semantics boundary, a logico-semantic component is necessary to capture referential and logical scope dependencies, particularly in logically complex sentences like (1) and (2):
(1) ‘we didn’t dismantle a whole tier of government only to have it replaced by a bureaucracy in Brussels’ (Margaret Thatcher).
(2) ‘I don’t know what scope delegations which have flagged that they can’t agree to the Council decision this afternoon would like to give the Secretariat to summarise their reservations’
Such logical structures can, if desired, be represented using sub-indexing, brackets or other logical notations (see for example Kamp and Reyle 1993; Allwood 1977). This kind of analysis can reveal how simultaneous interpreters deal with long-distance dependencies by a process of approximation and correction (Setton 1999: 271-274).
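As one possible rendering, offered purely for illustration, the scope relations in example (1) can be written in a bracketed logical form in which the negation takes scope over the purpose clause rather than over the main predicate:

\[
\mathrm{dismantle}(\mathit{we}, t) \;\wedge\; \neg\,\mathrm{Purpose}\bigl(\mathrm{dismantle}(\mathit{we}, t),\ \mathrm{replace}(t, b)\bigr)
\]

where \(t\) stands for ‘a whole tier of government’ and \(b\) for ‘a bureaucracy in Brussels’. The competing reading, \(\neg[\mathrm{dismantle}(\mathit{we}, t) \wedge \mathrm{Purpose}(\ldots)]\), would deny the dismantling itself; keeping such readings apart is precisely what the sub-indexing or bracketing must achieve, and what the interpreter's approximate first version must eventually get right.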
In terms of presentation, longer samples can be segmented for better readability. The German speech cited here fell rather naturally into segments of one or two syntactic sentences, except where (as above) a particularly long sentence, with parentheticals and embeddings, could be neatly split at a conjunction. Primary syntactic analysis shows, for instance, that interpreters produce more clause units than the original speaker.
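A minimal sketch, assuming a deliberately crude heuristic (splitting at sentence punctuation and a toy list of conjunctions; serious clause segmentation would need a parser), of how such a clause-unit comparison might be operationalised:

    import re

    # Crude heuristic: a clause-unit boundary at sentence punctuation or
    # before a conjunction (toy English stop-list; illustrative only).
    BOUNDARY = r"[.;?!]|\b(?:and|but|because|although|while|whereas)\b"

    def clause_units(transcript: str) -> int:
        parts = re.split(BOUNDARY, transcript)
        return sum(1 for p in parts if p.strip())

    source = "We dismantled a tier of government because it was inefficient."
    target = ("We dismantled a tier of government. It was inefficient, "
              "and we did not want it replaced.")

    print(clause_units(source), clause_units(target))  # 2 3

On invented sentences like these, the interpreted version yields more, shorter clause units than the source, mirroring the tendency reported above.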