Ministry of Higher and Secondary Special Education of the Republic of Uzbekistan
Uzbek State World Languages University




Interpretation apps
An alternative to traditional interpretation systems is mobile apps. IT specialists in the simultaneous interpretation field have developed systems that can work on their own or in combination with traditional interpretation hardware.
Simultaneous interpretation apps are mobile systems that stream real-time audio to listeners' phones over local Wi-Fi or the listeners' mobile data. The speaker's audio is transmitted to the interpreters, who then stream their interpretations using a special broadcaster or traditional consoles. Interpreters can work either on-site or remotely, in which case interpretation booths are no longer needed; likewise, listeners can join the stream from anywhere.
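The relay logic described above (one interpreter stream fanned out to many listeners) can be sketched in a few lines of Python. This is a toy in-process model with invented names (`InterpretationRelay` and its methods are illustrative, not taken from any real app); a production system would stream compressed audio over Wi-Fi or mobile data rather than pass bytes through local queues.

```python
import queue


class InterpretationRelay:
    """Toy model of an interpretation-app relay: the interpreter's
    audio chunks are fanned out to every subscribed listener.
    Illustrative only; a real app would transport audio over the
    network (e.g. WebRTC/RTP), not via in-process queues."""

    def __init__(self):
        self.listeners = []

    def subscribe(self):
        # Each listener gets a private queue, so a slow listener
        # never blocks the others.
        q = queue.Queue()
        self.listeners.append(q)
        return q

    def broadcast(self, chunk):
        # Every subscribed listener receives a copy of the chunk.
        for q in self.listeners:
            q.put(chunk)


relay = InterpretationRelay()
a = relay.subscribe()
b = relay.subscribe()
relay.broadcast(b"audio-chunk-001")
print(a.get_nowait() == b.get_nowait())  # True: both listeners got the same chunk
```

The same fan-out shape holds whether the interpreter works on-site or remotely; only the transport changes.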
A mobile application, or app, is a computer program or software application designed to run on a mobile device such as a phone, tablet, or watch. Mobile applications stand in contrast to desktop applications, which are designed to run on desktop computers, and to web applications, which run in mobile web browsers rather than directly on the mobile device.
Apps were originally intended for productivity assistance such as email, calendars, and contact databases, but public demand drove rapid expansion into other areas such as mobile games, factory automation, GPS and location-based services, order tracking, and ticket purchases, so that millions of apps are now available. Many apps require Internet access. Apps are generally downloaded from app stores, a type of digital distribution platform.
The term "app", short for "application", has since become very popular; in 2010 it was listed as "Word of the Year" by the American Dialect Society.[1]
Apps are broadly classified into three types: native, hybrid, and web apps. Native applications are designed specifically for one mobile operating system, typically iOS or Android. Web apps are written in HTML5, CSS, and JavaScript and typically run in a browser. Hybrid apps are built with the same web technologies but are packaged in a native container, functioning like web apps disguised as native apps.
Unlike in monolingual communication, in simultaneous interpreting (SI) a message in one language is perceived and processed almost concurrently with the production of an equivalent message in another language. To accomplish this feat, besides high proficiency in both the source and target languages, the interpreter must possess a set of specialized skills, including exceptional language-switching abilities, a large working memory (WM) span, and the ability to manipulate WM content and understand incoming discourse while producing a rendering of an earlier portion of the source message in the target language. By its nature, SI is externally paced, which calls for cognitive resource management and coping strategies.
In SI, an interpreter usually begins interpreting before the speaker has finished a sentence. The speaker, however, does not normally wait for the interpreter to complete the translation of the previous chunk before moving on to the next utterance. Moreover, it may not always be possible or convenient to maintain the sequential linearity of the target message relative to the source. For example, interpreters often reverse the order of lists. In some language combinations, e.g. German/English, syntactic constraints force the interpreter to wait for the final verb of the German source before constructing the target sentence in English [12]. Finally, the interpreter may choose to defer translating a word until a good enough equivalent comes to mind, hoping to work it into the target message later. The resulting source-target lag, also referred to as décalage or ear-voice span (EVS) in the interpretation studies literature, highlights the critical role of WM in the SI pipeline: WM provides the mental space within which to perform the transformations needed for a coherent and accurate target message to emerge.
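Once source and target word onsets are time-aligned, the ear-voice span is straightforward to compute. The sketch below uses made-up timestamps and assumes the source-target word alignment is already given (a simplification; real studies obtain it manually or via forced alignment):

```python
def ear_voice_span(source_onsets, target_onsets):
    """Per-word ear-voice span (EVS, décalage) in seconds: the lag
    between hearing a source word and producing its target-language
    equivalent. Assumes onsets are already paired by alignment."""
    return [t - s for s, t in zip(source_onsets, target_onsets)]


# Illustrative timestamps (seconds): the interpreter lags about 2 s,
# stretching the lag on a harder third word.
src = [0.0, 1.2, 2.5, 4.0]
tgt = [2.1, 3.2, 5.4, 6.1]
lags = ear_voice_span(src, tgt)
print([round(x, 2) for x in lags])  # [2.1, 2.0, 2.9, 2.1]
```

In a real analysis, averaging and variance of these per-word lags would index how the interpreter's décalage responds to source difficulty.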
Under normal circumstances, when the source message is relatively easy to understand and target equivalents are quickly and automatically retrieved from long-term memory (LTM), the interpreter maintains a comfortable décalage, accurately rendering the source message with almost no omissions. But when confronted with a long-winded, dense, or obscure passage, the interpreter may be forced out of this comfort zone and temporarily increase the lag to accommodate the need for more time to process it. The lag is similar to debt in that beyond a certain point it becomes difficult to handle. In extreme cases, when the interpreter falls too far behind the speaker, performance quality may be compromised: parts of the source message may be severely distorted or go missing from the translation altogether. This can happen when the interpreter has shifted much of his or her attention away from the currently articulated source chunk in order to finish processing the previous one stored in WM and catch up with the speaker. In sum, large lags are most likely caused by processing difficulties.
On the other hand, when the source message is relatively difficult to follow overall (e.g. when it is not in the interpreter's mother tongue), the interpreter may need to allocate extra effort to understanding. This can be done by shortening the décalage, effectively limiting the amount of information held in working memory. Such a strategy may result in a more literal translation that is likely to be syntactically and grammatically deficient.
In our opinion, the above considerations are best captured by Gile's Efforts Model, which conceptualizes SI in terms of three groups of mental operations, or 'efforts': listening, production, and memory. Since these efforts are mostly non-automatic and concurrent, they critically depend on, and compete for, a limited pool of attentional resources. A major implication of the model is that increased processing demands in one of the efforts can only be met at the expense of another. Indeed, several studies involving dual-task situations indirectly support this view, suggesting that transient performance decreases in one task occur due to the engagement of attention in another task (e.g.).
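The competition for a shared resource pool can be caricatured in a few lines. The function below is our illustrative formalization, not Gile's own (the model itself is not quantitative), with invented effort values:

```python
def efforts_deficit(listening, production, memory, capacity=1.0):
    """Toy reading of Gile's Efforts Model: three concurrent, largely
    non-automatic efforts draw on one limited pool of attentional
    resources; any demand above capacity must be paid by
    shortchanging some effort. Numbers are arbitrary illustrations."""
    demand = listening + production + memory
    return round(max(0.0, demand - capacity), 2)


# A dense source passage raises the memory effort from 0.2 to 0.5:
print(efforts_deficit(0.4, 0.3, 0.2))  # 0.0 (demand within capacity)
print(efforts_deficit(0.4, 0.3, 0.5))  # 0.2 (something must give)
```

The deficit in the second call is exactly the capacity that must be withdrawn from one of the efforts, which is the prediction the present study tests.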
To our knowledge, only one study has attempted to test the Efforts Model of SI experimentally. But as its author himself admitted, it "cannot be said to have led to [its] systematic testing or validation", and he suggested that "precise quantitative measurement" would help make the model more useful. To address this concern, at least partially, in the present paper we used the ERP technique to test one particular prediction of the Efforts Model, namely that increased processing demands on the 'memory effort' mean less processing capacity available to the 'listening effort' (which involves active processing of the heard input). In other words, a higher WM load should create a deficit of attention to the auditory stream. While this hypothesis may seem quite intuitive, to our knowledge it has never been tested experimentally in a naturalistic setting requiring participants to overtly interpret continuous prose.
Electrophysiological evidence supporting it would suggest that interpreters' brains gate part of the auditory input in order to properly process the information backlog and reduce the associated processing pressure. Here and throughout, we refer to the 'memory' and 'listening' efforts as defined by Gile. We exploited previous findings that the N1, and even P1, amplitudes evoked by task-irrelevant probes embedded in a speech stream are modulated by selective attention, in what is called 'processing negativity', observed as early as 50–150 ms from stimulus onset. Specifically, the ERP waveform is shifted towards negative values when the listener attends to the target audio. Moreover, a more recent EEG study showed that in a multitasking situation (and SI is an extreme case of multitasking) increased WM load decreases attention to the targets. Therefore, the parameters of these early auditory ERP components can be used as a suitable and temporally precise index of interpreters' attention to the spoken source message.
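The probe technique rests on averaging many EEG epochs time-locked to the probes, so that attention effects appear as amplitude shifts of the early components in the average. A minimal sketch of that averaging step, in plain Python on a toy one-channel signal (a real pipeline would also filter, re-reference, and baseline-correct, e.g. with MNE-Python):

```python
def average_erp(eeg, probe_onsets, window):
    """Average EEG epochs time-locked to probe onsets to obtain an ERP.
    eeg: single-channel samples; probe_onsets: sample indices of the
    task-irrelevant probes; window: samples per epoch. Attention
    effects such as the early 'processing negativity' would appear as
    amplitude shifts in this average. Sketch only: no filtering,
    re-referencing, or baseline correction."""
    epochs = [eeg[t:t + window] for t in probe_onsets if t + window <= len(eeg)]
    n = len(epochs)
    # Pointwise mean across epochs: probe-locked activity survives,
    # uncorrelated background activity averages out.
    return [sum(epoch[i] for epoch in epochs) / n for i in range(window)]


eeg = [0, 1, 2, 1, 0, 3, 4, 3]          # toy signal with two probe responses
erp = average_erp(eeg, [0, 4], window=4)
print(erp)  # [0.0, 2.0, 3.0, 2.0]
```

Comparing such averages between high- and low-WM-load conditions is, in essence, the contrast the present ERP design relies on.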


