Interpretation apps

An alternative to traditional interpretation systems is mobile apps. IT specialists working in the simultaneous interpretation field have developed systems that can operate on their own or in combination with traditional interpretation hardware.
Simultaneous interpretation apps are mobile systems that stream real-time audio to listeners' phones over local Wi-Fi or the listeners' mobile data. The speaker's stream is transmitted to interpreters, who then stream their interpretations using a special broadcasting device or traditional consoles. Interpreters can work either onsite or remotely, in which case interpretation booths are no longer needed. Likewise, people can listen to the stream from anywhere.
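The routing described above (speaker to interpreters, interpreters to listeners by language channel) can be sketched as a simple relay. This is a minimal illustrative model, not the architecture of any real product; all class and method names are hypothetical.

```python
# Sketch of the relay pattern behind an interpretation app:
# the speaker's audio feed is forwarded to every interpreter, and each
# interpreter's output channel is forwarded to its subscribed listeners.
from collections import defaultdict

class InterpretationRelay:
    def __init__(self):
        self.listeners = defaultdict(list)  # language code -> listener callbacks
        self.interpreters = []              # callbacks receiving the source feed

    def subscribe_listener(self, language, callback):
        self.listeners[language].append(callback)

    def subscribe_interpreter(self, callback):
        self.interpreters.append(callback)

    def push_source_chunk(self, chunk):
        # The speaker's audio chunk goes to every interpreter (onsite or remote).
        for cb in self.interpreters:
            cb(chunk)

    def push_interpretation_chunk(self, language, chunk):
        # An interpreter's rendering is broadcast to that language's listeners.
        for cb in self.listeners[language]:
            cb(chunk)

received = []
relay = InterpretationRelay()
relay.subscribe_listener("de", received.append)
relay.subscribe_interpreter(
    lambda chunk: relay.push_interpretation_chunk("de", f"DE({chunk})"))
relay.push_source_chunk("audio-0")
print(received)  # -> ['DE(audio-0)']
```

In a deployed system the callbacks would be network sockets or streaming sessions rather than in-memory functions, but the fan-out logic is the same.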
A mobile application, or app, is a computer program or software application designed to run on a mobile device such as a phone, tablet, or watch. Mobile applications often stand in contrast to desktop applications, which are designed to run on desktop computers, and web applications, which run in mobile web browsers rather than directly on the mobile device.
Apps were originally intended for productivity assistance such as email, calendars, and contact databases, but public demand caused rapid expansion into other areas such as mobile games, factory automation, GPS and location-based services, order tracking, and ticket purchases, so that there are now millions of apps available. Many apps require Internet access. Apps are generally downloaded from app stores, which are a type of digital distribution platform.
The term "app", short for "software application", has since become very popular; in 2010, it was listed as "Word of the Year" by the American Dialect Society.[1] Apps are broadly classified into three types: native apps, hybrid apps, and web apps. Native applications are designed specifically for a mobile operating system, typically iOS or Android. Web apps are written in HTML5, CSS, and JavaScript and typically run through a browser. Hybrid apps are built using web technologies such as JavaScript, CSS, and HTML5 and function like web apps disguised in a native container.

Unlike in monolingual communication, in simultaneous interpreting (SI) a message in one language is perceived and processed almost concurrently with the production of an equivalent message in another language. To accomplish this feat, besides high proficiency in both the source and target languages, the interpreter must possess a set of specialized skills, including exceptional language-switching abilities, a large working memory (WM) span, and the ability to manipulate WM content and understand incoming discourse while producing a rendering of an earlier portion of the source message in the target language. By its nature, SI is externally paced, which creates the need for cognitive resource management and coping strategies.
In SI, the interpreter usually begins interpreting before the speaker has finished a sentence. The speaker, however, does not normally wait before moving on to the next utterance, regardless of whether the interpreter has completed the translation of the previous chunk. Moreover, it may not always be possible or convenient to maintain the sequential linearity of the target message relative to the source. For example, interpreters often reverse the order of lists. In some language combinations, e.g. German/English, syntactic constraints force one to wait for the final verb in the German source before constructing the target sentence in English [12]. Finally, the interpreter may choose to defer translating a word until a good enough equivalent comes to mind, hoping to be able to work it into the target message later. The resulting source-target lag, also referred to as décalage or ear-voice span (EVS) in the interpretation studies literature, highlights the critical role of WM in the SI pipeline. WM represents a mental space within which to perform the transformations needed for a coherent and accurate target message to emerge.
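The lag described above is simply the difference between when a source segment is heard and when its rendering begins. A toy computation makes the idea concrete; the segment labels and timestamps below are invented example data, not measurements from any study.

```python
# Illustrative computation of ear-voice span (EVS, décalage):
# the lag between the onset of a source segment and the onset of its
# rendering in the target language. Timestamps are in seconds (made up).
source_onsets = {"greeting": 0.0, "topic": 2.1, "list_item": 4.8}
target_onsets = {"greeting": 1.4, "topic": 4.0, "list_item": 8.3}

def ear_voice_span(source_onsets, target_onsets):
    """Per-segment lag between source onset and target onset."""
    return {seg: round(target_onsets[seg] - source_onsets[seg], 2)
            for seg in source_onsets}

evs = ear_voice_span(source_onsets, target_onsets)
print(evs)  # {'greeting': 1.4, 'topic': 1.9, 'list_item': 3.5}
```

A rising EVS across segments (here 1.4 s growing to 3.5 s) is exactly the accumulating "debt" discussed below: every second of lag is source material the interpreter must hold and repay from working memory.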
Under normal circumstances, when the source message is relatively easy to understand and target equivalents are quickly and automatically retrieved from long-term memory (LTM), the interpreter maintains a comfortable décalage, accurately rendering the source message with almost no omissions. But when confronted with a long-winded, dense, or obscure passage, the interpreter may be forced out of this comfort zone and temporarily increase the lag to gain more time for processing. The lag is similar to debt in that beyond a certain point it becomes difficult to handle. In extreme cases, when the interpreter falls too far behind the speaker, performance quality may be compromised: parts of the source message may get severely distorted or go missing from the translation altogether. This may happen when the interpreter has shifted much of his/her attention away from the currently articulated source chunk in order to finish processing the previous one stored in WM and catch up with the speaker. In sum, large lags are most likely caused by processing difficulties.
On the other hand, when the source message as a whole is relatively difficult to follow (e.g. when it is not in the interpreter's mother tongue), the interpreter may need to allocate extra effort to understanding. This can be done by shortening the décalage, effectively limiting the amount of information to be processed in working memory. Such a strategy may result in a more literal translation that is likely to be syntactically and grammatically deficient.
In our opinion, the above considerations are best captured by Gile's Efforts Model, which conceptualizes SI in terms of three groups of mental operations, or 'efforts': listening, production, and memory. Since these efforts are mostly non-automatic and concurrent, they critically depend on and compete for a limited pool of attentional resources. A major implication of the model is that increased processing demands in one of the efforts can only be met at the expense of another. In fact, several studies involving dual-task situations indirectly support this view, suggesting that transient performance decreases in one task occur due to the engagement of attention in another task (e.g.).
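The model's core constraint, that the three concurrent efforts share one fixed attentional pool, can be expressed as simple arithmetic. The capacity values below are arbitrary units chosen purely for illustration; Gile's model is qualitative and does not prescribe such numbers.

```python
# Toy illustration of the Efforts Model constraint: listening (L),
# production (P), and memory (M) compete for one attentional pool,
# so L + P + M must stay within total capacity.
TOTAL_CAPACITY = 10.0  # arbitrary units, for illustration only

def attention_deficit(listening, production, memory, total=TOTAL_CAPACITY):
    """Positive result = combined demands exceed capacity; some effort is starved."""
    return max(0.0, (listening + production + memory) - total)

# Comfortable passage: demands fit within capacity, no effort suffers.
print(attention_deficit(3.0, 3.0, 3.0))  # 0.0
# Dense passage inflates the memory effort; the deficit must be paid
# by one of the other efforts, e.g. reduced listening.
print(attention_deficit(3.0, 3.0, 5.0))  # 1.0
```

The second call is the scenario tested in this paper: when memory demands rise, the shortfall is predicted to come out of the listening effort.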
To our knowledge, only one study has attempted to test the Efforts Model of SI experimentally. But as its author himself admitted, "it cannot be said to have led to [its] systematic testing or validation", and he suggested that "precise quantitative measurement" would help to make it more useful. To address this concern (at least partially), in the present paper we used the ERP technique to test one particular prediction of the Efforts Model, namely that increased processing demands on the 'memory effort' mean less processing capacity available to the 'listening effort' (which involves active processing of the input heard). In other words, a higher WM load would create a deficit of attention to the auditory stream. While this hypothesis may seem quite intuitive, to our knowledge it has never been tested experimentally in a naturalistic setting requiring the participants to interpret continuous prose overtly.
Electrophysiological evidence supporting it would suggest that interpreters' brains gate part of the auditory input in order to properly process the information backlog and reduce the associated processing pressure. Here and throughout, we refer to the 'memory' and 'listening' efforts as defined by Gile. We exploited previous findings that the N1 and even P1 amplitudes evoked by task-irrelevant probes embedded in a speech stream are modulated by selective attention, in what is called 'processing negativity', observed as early as 50–150 ms from stimulus onset. Specifically, the ERP waveform appears shifted towards negative values when the listener attends to the target audio. Moreover, a more recent EEG study showed that in a multitasking situation (and SI is an extreme case of multitasking) increased WM load decreases attention to the targets. Therefore, the parameters of these early auditory ERP components can be used as a suitable and temporally precise index of interpreters' attention to the spoken source message.
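The probe-locked averaging that underlies such ERP measures can be sketched in a few lines. Real analyses use dedicated EEG toolboxes with filtering, baseline correction, and artifact rejection; the function and toy signal below are a bare-bones illustration of epoch extraction and averaging only.

```python
# Sketch of probe-locked ERP averaging: cut epochs around each probe
# onset and average them, so that probe-evoked components (e.g. N1 in
# the 50-150 ms window) emerge from the ongoing signal.
def average_epochs(signal, onsets, start, end):
    """Average signal segments [onset+start, onset+end) across probes.

    signal: list of samples; onsets: sample indices of probe onsets;
    start/end: window bounds in samples relative to each onset.
    """
    n = end - start
    acc = [0.0] * n
    for onset in onsets:
        for i, value in enumerate(signal[onset + start:onset + end]):
            acc[i] += value
    return [v / len(onsets) for v in acc]

# Two toy "trials": a negative deflection following each probe onset.
signal = [0, 0, -2, -4, -2, 0, 0, 0, 0, -2, -4, -2, 0, 0]
erp = average_epochs(signal, onsets=[1, 8], start=0, end=5)
print(erp)  # [0.0, -2.0, -4.0, -2.0, 0.0]
```

Under the hypothesis tested here, a higher WM load would attenuate this probe-evoked negativity, indexing reduced attention to the auditory stream.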