LECTURE 9 SIMULTANEOUS INTERPRETING
DEFINITION OF SIMULTANEOUS INTERPRETING
FEATURES OF SIMULTANEOUS INTERPRETING
PROCESSES OF SIMULTANEOUS INTERPRETING
In the definition given by Jones (2002), a simultaneous interpreter listens to the beginning of the speaker's comments and then begins interpreting while the speech continues, carrying on throughout the speech to finish almost at the same time as the original. The interpreter is thus speaking simultaneously with the original, hence the name. Consecutive interpreters are said to produce a more accurate and equivalent interpretation than their simultaneous colleagues because they do not need to split their attention between receiving the message and monitoring their output, as is required in simultaneous interpreting; they can devote more of their processing to analysis and reformulation of the text (Santiago, 2004). Moreover, consecutive interpreters have time to take notes, which serve as a very effective tool for the interpreter. In simultaneous interpreting, the interpreter sits in a booth overlooking the meeting room. The speeches given are interpreted simultaneously and relayed to delegates by means of sound equipment.
Although most people think of the Nuremberg Trials after World War II as the birth of simultaneous interpretation (English, French, Russian, and German), the concept was in fact born in the US and existed for some time before there was any large-scale demand for it. As early as 1924, Edward Filene, the Boston capitalist and social reformer, sponsored the use of simultaneous interpretation during entire official meetings of the International Labor Organization, and for more languages than the four used in Nuremberg. His goal was to find an alternative to consecutive conference interpretation. The Nuremberg Trials, although not the first example of simultaneous interpretation, did have radical consequences for the profession.
At that time, consecutive interpretation, which had been in use since the 1919 Paris Peace Conference, was the standard at international gatherings, such as at meetings of the League of Nations in Geneva, where English and French were used. The Nuremberg trials changed all that.
RELEVANT FEATURES OF SIMULTANEOUS INTERPRETING
Divided attention
The interpreter, like the translator, is both a receiver and a producer of text but, in the case of the former, the near simultaneity of the reception and production processes and the fact that there is no opportunity for working on successive drafts of text output create differences which are important in terms of performance. Speaking at the same time as the source text producer, interpreters have to run several processing activities concurrently. In addition to processing current input, they have to translate the immediately preceding input, encode their own output and monitor it (the interpreter's headset incorporates feedback from microphone to earpiece of his/her own voice so that output can be monitored). Time available for evaluative or reflective listening is thus curtailed. Shlesinger (1995) notes that this constraint entails a trade-off among the separate components of the task. For example, if syntactic processing becomes especially burdensome at a particular juncture, then time available for, say, lexical searching will be reduced (Gile).
Décalage/Ear-voice span
The necessary time-lag between reception of source text and production of target text has been called the ear-voice span (EVS; Gerver 1976; Goldman-Eisler 1980) and is said to vary from approximately two to ten seconds, depending for example on individual style, on syntactic complexity of input and on language combination. Variations in EVS can, of course, be taken as a rough measure of the size of the stretch of source text currently being processed. In general terms, the shorter the EVS, the closer the translation will adhere to the form of the source text. The correspondence is, however, not absolute. But whereas EVS is at least measurable, the length of text being processed at any given time during written translation is not observable in the same way. Thus, some insight into the translator's mode of operation is available in simultaneous interpreting. Most importantly, EVS imposes strain on short-term memory and, if it is allowed to become too long, breakdown can occur.
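The EVS described above can be estimated from time-aligned transcripts by subtracting the onset of each source-speech segment from the onset of the interpreter's corresponding output. A minimal sketch, with all timestamps invented for illustration:

```python
# Estimate ear-voice span (EVS) from onset times of corresponding
# source-speech and interpreter-output segments.

def ear_voice_span(source_onsets, target_onsets):
    """Return per-segment EVS values: target onset minus source onset."""
    return [t - s for s, t in zip(source_onsets, target_onsets)]

# Hypothetical onsets (in seconds) for five aligned segments.
source = [0.0, 4.2, 9.0, 15.5, 21.0]
target = [2.5, 7.0, 12.1, 19.8, 24.0]

evs = ear_voice_span(source, target)
mean_evs = sum(evs) / len(evs)
print([round(v, 1) for v in evs])  # lag per segment
print(round(mean_evs, 1))          # average lag across segments
```

With these invented figures the lags fall in the two-to-ten-second range the literature reports; a lengthening per-segment lag would signal the growing short-term memory strain mentioned above.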
Audience design
Bell (1984) drew attention to the ways in which text producers adapt their output to what he called audience design, that is, the perceived receiver group. It is important to realize that the interpreter, as a receiver of the source text, is not the intended addressee. But speakers accommodate to their addressees in a variety of ways. As Shlesinger (1995) points out, speakers at a specialist conference gear their output to an expected level of specialized knowledge on the part of their audience, knowledge which the interpreter would often not share. Speakers also rely on feedback from their addressees, judging the extent to which even a very passive audience is following, becoming involved, losing interest, etc. In most cases, feedback from the interpreters in their booths will not be available (or even of interest) to the speaker. Thus the interpreter cannot be said to be a ratified participant in the speech event but is, rather, an overhearer (Bell 1984). Further, speeches for simultaneous translation tend to be of a particular kind. In many cases, the mode of the source text will be written-to-be-read-aloud and the propositional content will be non-trivial, with sustained and planned development of a single topic. Pace of delivery will of course be affected by whether the source text is spontaneous speech or written text (and may even be influenced by the fact that the text is to be simultaneously translated). But it will not be affected by the pressures of face-to-face interaction. Indeed, the simultaneous interpreter is in a totally different situation from that of the participant in a speech exchange who negotiates meaning with an interlocutor. The interpreter is rather what we may call an 'accountable listener', in the sense that the product of their listening is held up for scrutiny in a way to which the ordinary listener is not subject. And the interpreter's response will not be one of interaction with an interlocutor but rather of sympathetic impersonation of a source text speaker, with a similar group of addressees in mind to that of the speaker.
Continuous response
A further concomitant of the situation is that, given the requirement of divided attention and immediacy of response, the simultaneous interpreter concentrates on processing only current input. In other words, there is likely to be less matching of current input with previous text than is the case in other forms of processing such as listening to a monologue or, especially, reading. Whereas co-textual clues do form an important part of the interpreter's understanding of text, preference is probably granted to the immediate pre-text over earlier text segments. Studies have shown that recall of verbal material is lower after simultaneous interpreting than after other forms of processing, probably due to phonological interference between input and output (Darò and Fabbro 1994). The simultaneous interpreter relies on textural signals. Context is muted because the interpreter is not a ratified participant in the speech event and because the constraints of immediacy of response and the focus on short units deny the interpreter the opportunity for adequate top-down processing. The same constraints (only a very small segment of text in active storage, the narrower processing channel) affect appreciation of structure. Structure is then something which may be inferred from textural clues such as those to be listed below, but it is not available to the receiver in its entirety in the same way as it is to the consecutive interpreter or the receiver of written texts.
PROCESSES IN SIMULTANEOUS INTERPRETING
Daniel Gile emphasizes the difficulties and efforts involved in interpreting tasks and the strategies needed to overcome them, observing that many failures occur in the absence of any visible difficulty. He then proposes his Effort Models for interpreting: "The Effort Models are designed to help interpreters understand these difficulties of interpreting and select appropriate strategies and tactics." Gile's Effort Model for Simultaneous Interpreting is: SI = L + M + P
SI=Simultaneous Interpreting
L=Listening and Analysis, which includes "all the mental operations between perception of a discourse by auditory mechanisms and the moment at which the interpreter either assigns, or decides not to assign, a meaning (or several potential meanings) to the segment which he has heard." M=Short-term Memory, which includes "all the mental operations related to storage in memory of heard segments of discourse until either their restitution in the target language, their loss if they vanish from memory, or a decision by the interpreter not to interpret them." P=Production, which includes "all the mental operations between the moment at which the interpreter decides to convey a datum or an idea and the moment at which he articulates (overtly produces) the form he has prepared to articulate".
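The formula above can be read as a capacity constraint: at any moment, the combined demand of the three efforts must stay within the interpreter's total available processing capacity, and when it does not, omissions and errors become likely. A minimal sketch of this reading, with the capacity units and per-segment figures entirely invented for illustration:

```python
# Toy illustration of Gile's Effort Model SI = L + M + P:
# each incoming segment consumes Listening (L), Memory (M) and
# Production (P) capacity; failure becomes likely when their sum
# exceeds the interpreter's total available capacity.

TOTAL_CAPACITY = 10  # hypothetical units of processing capacity

segments = [
    {"text": "greeting",         "L": 2, "M": 2, "P": 2},
    {"text": "dense statistics", "L": 4, "M": 4, "P": 3},  # overload
    {"text": "closing remark",   "L": 3, "M": 2, "P": 2},
]

for seg in segments:
    demand = seg["L"] + seg["M"] + seg["P"]  # SI = L + M + P
    status = "ok" if demand <= TOTAL_CAPACITY else "overload: omission risk"
    print(f'{seg["text"]}: demand={demand} -> {status}')
```

This matches Gile's observation that failures occur without any visible difficulty: a segment that is only moderately demanding on each effort individually can still push the total over the limit.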
According to Gile, the process of interpreting could be re-postulated into:
1. Encoding of information from the Source Language/Understanding
2. Storing Information
3. Retrieval of Information
4. Decoding Information into the Target Language.
1- Understanding/Encoding
Understanding means converting discourse, i.e. words and signs, into sense. Cognitive inputs of several kinds make this possible. Native listeners and readers are usually not aware of the way in which cognitive inputs shape our understanding. Language alone seems to be present, but situational, contextual and world knowledge come into play quite naturally. In everyday conversation, when listening to each other, the part played by knowledge of language is difficult to distinguish from that played by background information. However, we sometimes realize we lack some knowledge other than that of the language in order to fully understand what we are listening to. Thus background knowledge, together with knowledge of the language, plays a role in understanding discourse.
2- Storing Information/Memory
In Consecutive Interpreting, there is probably up to 15 minutes (depending on the speaker's segments) for the interpreter to encode and then store the information. This is the first phase of Gile's Effort Model for CI. In the second phase of Gile's Model, the interpreter starts to retrieve information and decode it into the target language. In SI, encoding and decoding of information happen almost at the same time. The duration for storing the information is very limited. Therefore, in the first step of interpreting, encoding (understanding) the information uttered in the SL is the key to memory training. There are three main possibilities of storing information in STM: (1) acoustic coding; (2) visual coding; and (3) semantic coding. Visual coding may be used by interpreters in conference situations with multimedia. Notes in interpreting serve to assist in such visual coding of information. But in most interpreting contexts, interpreters will depend on acoustic and semantic coding. "The interpreter needs a good short-term memory to retain what he or she has just heard and a good long-term memory to put the information into context. Ability to concentrate is a factor, as is the ability to analyze and process what is heard" (2001). Gile emphasizes that the memory effort is assumed to stem from the need to store the words of a proposition until the hearer receives the end of that proposition. The storage of information is claimed to be particularly demanding in SI, since both the volume of information and the pace of storage and retrieval are imposed by the speaker. In both models, Gile emphasizes the significance of short-term memory. It is actually one of the specific skills which should be imparted to trainees in the first stage of training. Among all the skills and techniques required of a good interpreter, memory skill is the first which should be introduced to trainee interpreters.
A skillful interpreter is expected to have a powerful memory. Memory functions differently in consecutive and simultaneous interpreting, because the duration of memory is longer in CI than in SI, and there are different methods of training STM for CI and SI respectively. Interpreting starts with the encoding of the information from the original speaker. According to Gile's Effort Model, interpreting is an STM-centered activity.
Short-Term vs. Long-Term Memory
Psychological studies of human memory make a distinction between Short-Term Memory (STM) and Long-Term Memory (LTM). The idea of short-term memory simply means that you are retaining information for a short period of time without creating the neural mechanisms for later recall. Long-term memory occurs when you have created neural pathways for storing ideas and information which can then be recalled weeks, months, or even years later. To create these pathways, you must make a deliberate attempt to encode the information in the way you intend to recall it later. Long-term memory is a learning process, and it is an essential part of the interpreter's acquisition of knowledge, because information stored in LTM may last for minutes, weeks, months, or even an entire life. The duration of STM, by contrast, is very short: up to 30 seconds. Peterson (1959) found it to be 6-12 seconds, while Atkinson and Shiffrin (1968) and Hebb (1949) state it is 30 seconds. Memory in interpreting only lasts for a short time. Once the interpreting assignment is over, the interpreter moves on to another one, often with a different context, subject and speakers. Therefore, the memory skills which need to be imparted to trainee interpreters are STM skills.
Major Features of STM
Input of information: It is generally held that information enters the STM as a result of applying attention to the stimulus, which takes about a quarter of a second according to the findings of both Sperling and Crowder. However, MacKay's findings do not fully support this, asserting that unattended information may also enter the STM.
Capacity: As mentioned in the previous section, the capacity of STM is limited and small. Atkinson and Shiffrin (1968) propose that it is seven items of information (give or take two). Miller (1956) says it is seven "chunks." Another possibility may be that the limiting factor is not the STM's storage capacity but its processing capacity (Gross).
Modality: To store information in STM, it must be encoded, and there is a variety of possibilities as to how this operates. There are three main possibilities in STM: (1) Acoustic (phonemic) coding is rehearsing through sub-vocal sounds (Conrad and Baddeley). (2) Visual coding is, as implied, storing information as pictures rather than sounds. This applies especially to nonverbal items, particularly if they are difficult to describe using words. In very rare cases some people may have a "photographic memory," but for the vast majority the visual code is much less effective than this (Posner and Keele). (3) Semantic coding is applying meaning to information, relating it to something abstract (Baddeley, Goodhead).
Information loss: There are three main theories as to why we forget from our STM: (1) Displacement: existing information is replaced by newly received information when the storage capacity is full (Waugh and Norman). (2) Decay: information decays over time (Baddeley, Thompson and Buchanan). (3) Interference: other information present in the storage at the same time distorts the original information (Keppel and Underwood).
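Miller's point about capacity is usually illustrated through chunking: regrouping raw items into larger units so that the number of chunks to be held falls within the seven-plus-or-minus-two limit. A minimal sketch, using an invented twelve-digit figure of the kind an interpreter might have to carry across a segment:

```python
# Illustration of Miller's (1956) "seven chunks, give or take two":
# regrouping raw items into larger chunks reduces the number of
# units short-term memory must hold. The digit string is invented.

STM_CAPACITY = 7  # chunks, give or take two

def chunk(items, size):
    """Group a sequence into consecutive chunks of the given size."""
    return [items[i:i + size] for i in range(0, len(items), size)]

digits = "491587239061"       # 12 raw digits: exceeds the chunk limit
grouped = chunk(digits, 3)    # 4 three-digit chunks: well within it

print(len(digits) > STM_CAPACITY)       # True: too many raw items
print(grouped)                          # ['491', '587', '239', '061']
print(len(grouped) <= STM_CAPACITY)     # True: fits as chunks
```

The same regrouping move underlies the segment-by-segment processing described earlier: what counts against capacity is the number of units, not the raw amount of material.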
Retrieval: There are two main modes of retrieval of information from STM: (1) Serial search: items in STM are examined one at a time until the desired information is retrieved (Sternberg). (2) Activation: retrieval depends on the activation of the particular item reaching a critical point (Monsell, Goodhead).
The human brain has evolved to encode and interpret complex stimuli (images, color, structure, sounds, smells, tastes, touch, spatial awareness, emotion, and language), using them to make sophisticated interpretations of the environment. Human memory is made up of all these features.
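Sternberg's serial-search account can be sketched directly: the store is scanned one item at a time, so the work done grows with the position of the sought item. A minimal sketch, with the STM contents invented for illustration:

```python
# Sketch of Sternberg-style serial search in STM: items are examined
# one at a time until the probe is found, so the number of
# comparisons grows with the item's position in the store.

def serial_search(store, probe):
    """Return (found, comparisons) after scanning the store in order."""
    for i, item in enumerate(store, start=1):
        if item == probe:
            return True, i
    return False, len(store)

stm = ["delegate", "quota", "treaty", "budget"]  # invented STM contents

print(serial_search(stm, "treaty"))  # (True, 3): found on 3rd comparison
print(serial_search(stm, "tariff"))  # (False, 4): full scan, not found
```

The activation account, by contrast, would retrieve items in parallel according to their activation level rather than by position, which is why the two theories predict different retrieval-time patterns.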
3- Decoding/Deverbalization of Information
This process is related to the dissociation of language from the mental representation in the mind, which is then verbalized in the other language. Chesterman (2002) explains the phenomenon as follows: it means simply that a translator or interpreter has to get away from the surface structure of the source text, arrive at the intended meaning, and then express this intended meaning in the target language. In other words, deverbalization is a technique used to avoid unwanted formal interference. Without going into a discussion of whether deverbalization is a technique or part of the translating process, one thing is certain: although in the past deverbalization as a concept was the object of much criticism, today this stage is recognized as indispensable not only by interpreters but also by translators, because it makes it much easier to discover modes of expression that are not influenced by the original language. Carrying sense over to the other language means that, having deverbalized (in other words, having left aside the words and structure of the source-language text), interpreters proceed to express a sense that they have internalized, as they would in monolingual communication when creating a new message.
4- Reformulation
When the sense of the message has become sufficiently clear and the original wording is forgotten, new questions will have to be answered: For whom are we going to interpret? And how? The interpretation has to make sense for the audience, who will mostly not have the same cultural background as those for whom the original message was intended. This will often require explicitation, implicitation, changes in text structure, etc.