Lebende Sprachen 2017; 62(2): 253–268

Ailing Zhang*
Simultaneous Interpreting (SI): the Holy
Grail of Artificial Intelligence – An SI
Practitioner’s Perspective
[Link]

Abstract: Artificial Intelligence (AI) has become a household expression, especially in the past couple of years, thanks to Google's AI computer program AlphaGo defeating world-class Go masters from Korea and China. In
recent years, machines have surpassed humans in the performance of certain
specific tasks, such as some aspects of image recognition. Although it is unlikely
that machines will exhibit broadly-applicable intelligence comparable to or ex-
ceeding that of humans in the near future, experts forecast that rapid progress in
the field of specialized AI will continue, with machines reaching and exceeding
human performance on an increasing number of tasks. Simultaneous interpret-
ing, being among the most complex of human cognitive/linguistic activities, with
all the associated ergonomic elements, has been discussed profusely as one of the
most likely to be taken over by AI within a few years. Given how much has to happen simultaneously, i.e. anticipation, restoration of the implicit-explicit balance, and communicative re-packaging (‘re-ostension’1) of the discourse, simultaneous interpreting (SI) has yet to be fully explained, not least the rich pragmatic information guiding the construction of appropriate contexts and the speaker's underlying intentionalities. This is the centrepiece of SI, and the elusive nature of meaning assembly could be quite beyond even the most intelligent robots. This paper discusses what SI is like and how AI has evolved, and concludes that AI is dramatically changing the profession, but that, given the rule-based nature of AI driven by data and algorithms, the human intelligence possessed by SI practitioners is still irreplaceable. SI practitioners can harness cutting-edge technologies such as AI to do a better job, which bears significance for trainers and would-be SIs.

1 Simultaneous Interpretation: a cognitive-pragmatic [Link] from: [Link] [Link]/publication/305392001_Simultaneous_Interpretation_a_cognitive-pragmatic_analysis [accessed Jun 8, 2017].

*Kontaktperson: Prof. Dr. Ailing Zhang, Shanghai International Studies University, 550 Dalian Xi
Road, Shanghai 200083, P. R. China, E-Mail: azhang@[Link]

Keywords: Simultaneous Interpreting (SI), Artificial Intelligence (AI), Human Intelligence (HI), algorithms, big data

Technological advancement has accelerated exponentially in the past couple of years, and 2016 may have been a tipping point, owing to the explosive development of big data technology. Hence the euphoria in the IT sector. At the June FIT conference in China's Xi'an, Eric Yu from GlobalTone told almost a thousand participants from international organizations (IOs), government, industry and academia that interpreters would be extinct in five years' time, a forecast that
interpreters/translators frowned upon. Then on 17 November, at the third World Internet Conference in China's Wuzhen, attended mainly by leaders of tech giants such as Alibaba, Baidu, and Tencent, the CEO of Sogou, China's third largest search engine, said to be eyeing an IPO on NASDAQ in 2017, gave a talk on the future of search engines. In the course of his speech, he conducted a demo of real-time transcripts appearing as text on the screen (as shown in Figure 1), both Chinese transcription and English translation, simultaneously so to speak,
and remarked that “In the future, simultaneous interpreters may be unem-
ployed!”

Figure 1

(English on the screen: Say what? Because of this time, the use of statistics is not
strong enough to support. It is necessary to go down. ....The understanding of the
specific concepts in the sentence is able to eliminate the ambiguity.)
Ironically though, as Figure 1 shows, the English version from the original
Chinese doesn’t seem to make sense and looks like gibberish resulting from
mechanical word-for-word trans-coding. It was sensible of Mr. Wang to also talk
about the current “short board” (literal translation from the Chinese duan3ban3
短板, meaning shortfall or deficiency) of artificial intelligence, by providing the
example that Sogou voice recognition has reached 95% accuracy in a quiet environment, or even 97%, but once there is noise, e.g. when two people speak at the same time, accuracy declines rapidly. The machine does not seem to know how to tell the voices apart, and academia has yet to find a solution. After all, the human intelligence of simultaneous interpreters remains indispensable in understanding nuances, irony and allusions, let alone empathy, emotions and prosodic features, all of which are beyond the rule-based algorithms feeding on big data.

1 What is simultaneous interpreting like, and is AI up to the task?
Simultaneous interpreting is defined as the most complex cognitive communication task, one that can be performed only by a limited number of well-trained people with the right gift and aptitude. The way it works and the aptitudes it requires make it distinct from most other activities.

i How does it work?

Simultaneous interpreting is more than just trans-coding. Seleskovitch (1995) describes it as involving a triangular process: comprehension of the meaning in the source language, deverbalization, and reformulation of the same meaning in the target language. The meaning consists of the linguistic meaning generated as the speaker is heard and a cognitive supplement from the interpreter closely related to that linguistic meaning.
Seleskovitch came up with the hypothesis of the interpreting triangular model, which implies the following: interpreting is a ternary process (see Figure 2): first comes listening in the source language, then the meaning and sense of the discourse (the object of interpreting) is perceived, after which comes the final stage (the most important one in interpreting), the reformulation of the acquired meaning in the target language. Trans-coding, however, only applies to the simultaneous interpreting of terms, numbers, and names. The process is not straightforward, but passes through various phases, i.e. it is an active process clustered around the "understanding" and then "re-expression" of ideas.



Figure 2: Interpreting is a ternary process

Delisle (1988) further argues that since language and thought are separate entities, interpreters decode the discourse with the participation of their cognitive knowledge, strip the meaning they acquire from its original linguistic form, and store it in their brains in a non-verbal form. He divides this process into three stages:

1) Understanding
The speaker does not explicitly express everything s/he wants to transmit, since situation and context play a major role in the interpretation of the implicit. Interpreters differ from ordinary readers, for they have the skill of capturing the meaning embodied between the lines by reference to the contextual value of each word, which enables them to grasp the meaning in its entirety.

2) Contextualization
To interpret means to cross the bridge between different cultures and language systems, and here comes the role of the cognitive complements: the non-linguistic elements that contribute to the process of understanding, which include all that is conceptual, cultural and aesthetic-emotional, added to the verbal signals forming the general contextual dimensions (verbal, situational and cognitive context). Interpreting requires a sharp understanding of the original beyond lexical constraints; it goes far beyond the linguistic framework, requiring the interpreter to draw on his/her knowledge and skills in analyzing the discourse in its general context (see the graphic representation in Figure 3).

Figure 3: A Graphic Representation of Interpreting from The Source Language to The Target
Language

At the de-verbalisation stage, when sense is freed from all linguistic structures of the source language, the interpreter is then in quest of new linguistic structures matching the target language, a very important step, as it enables the interpreter to avoid overlap between the two languages during the re-expression stage, hence resulting in a smooth and faithful interpretation. So, during re-expression, the interpreter must pay attention to the problem of overlap, and reformulate while avoiding as much as possible the interference that exists between the different systems of languages.

3) Reformulation
One may ask what is re-expressed exactly. Is it a reformulation of the cultural component? Or of the linguistic one? Or is it, rather, a reformulation of the communicative meaning? The answer is that the interpreter may enclose more than what was originally said in the text and explain if necessary, but without adding anything new to the core meaning. S/he can change the cultural image to bring the content closer to the receiving public, but without changing the function of this very image.
In other words, much is left to the discretion of the interpreter, who has to be able to perform mental gymnastics, to edit where the speaker makes slips of the tongue, and even when a written text is provided, the interpreter still has to "check against delivery", as the speaker may decide to skip, add or revise, or insert some anecdotes to illustrate the intended message. Figure 4 shows just a few example pages that the present author received in the booth with her team while interpreting for the 31st Session of the Human Rights Council (HRC). A kind note of caution to "check against delivery" was on the top right-hand corner of each script page, and each of the speakers did deviate from the script at some point while flipping through the pages. That's why simultaneous interpreters have to be
good at multi-tasking, i.e. listening, reading, checking and interpreting all at the same time. Simultaneous interpreting with text may be one task for which interpreters would be more than happy to have AI on board.

Figure 4: “Check against delivery”

ii What is required of the SIs?

On November 18, 1956, while addressing Western ambassadors at a reception at the Polish embassy in Moscow, the Soviet premier Nikita Khrushchev told Western bloc ambassadors "Мы вас похороним". Viktor Sukhodrev, Khrushchev's personal interpreter, rendered that into English as "We will bury you." This statement sent shock waves through the Western world, heightening the tension between the Soviet Union and the US, who were in the thick of the Cold War. Some
believe this incident alone set East-West relations back a decade. As it turns out,
Khrushchev's remark was interpreted a bit too literally. Given the context, his words should have been rendered as "We will live to see you buried", meaning that communism would outlast capitalism, a less threatening comment. Though the intended meaning was eventually clarified, the initial impact put the world on a path that could have led to nuclear Armageddon.
Indeed, given the complexities of exchanges across languages and cultures, how does this sort of thing not happen all the time? Much of the answer lies with
the skill and training of interpreters to overcome language barriers. For most of
history, interpretation was mainly done consecutively with speakers and inter-
preters making pauses to allow each other to speak. But after the advent of radio
technology, a new simultaneous interpretation system was developed in the wake
of World War Two. In the simultaneous mode, interpreters instantaneously interpret a speaker's words into a microphone while the speaker talks without pausing. Those in the audience can choose the language in which they want to follow. On the surface, it all looks seamless, but behind the scenes, human interpreters work incessantly to make sure every idea gets across as intended. And that is no easy task. It takes about two years of training for already fluent bilingual professionals to expand their vocabulary and master the skills necessary to become a conference
interpreter. To get used to the unnatural task of speaking while listening, students
shadow speakers and repeat their every word exactly as heard in the same
language. In time, they begin to paraphrase what is said, making stylistic adjust-
ments as they go. At some point, a second language is introduced. Practicing in
this way creates new neural pathways in the interpreter’s brain. And the constant
effort of reformulation gradually becomes second nature. Over time and through
much hard work, the interpreter masters a vast array of tricks to keep up with
speed, deal with challenging terminology and handle a multitude of foreign
accents. They may resort to acronyms to shorten long names, choose generic terms over specific ones, or refer to slides and other visual aids. They may even leave a term in the original language while they search for the most accurate equivalent. Interpreters are also skilled at keeping calm in the face of chaos. They have no control over who is going to say what, or how articulate the speaker will sound. A curve ball can be thrown at any time. They also often perform to thousands of people in very intimidating settings like the UN General Assembly. To keep their emotions in check, they carefully prepare for an assignment, building glossaries in advance, reading voraciously about the subject matter, and reviewing previous talks on the topic.
Finally, interpreters work in pairs. While one colleague is busy handling incoming speeches in real time, the other gives support by locating documents, looking up words and tracking down pertinent information. Because simultaneous interpretation requires intense concentration, the pair switches roles every 30 minutes. Success is heavily dependent on skillful collaboration. Language is complex, and when abstract and nuanced concepts get lost in translation, the consequences can be catastrophic. As Margaret Atwood famously noted, "war is what happens when language fails." Conference interpreters, of all people, are most aware of that and work diligently behind the scenes to make sure it never does. Flexibility, quick reflexes, responsiveness, and a sound sense of ethics: these are the ultimate characteristics of a good interpreter. And new issues come to
confront present-day interpreters, such as the issue of professional impartiality.


Unlike staff interpreters in the UN, the EU or other international organizations, whose neutrality is guaranteed, colleagues who work either in-house for certain institutions or as freelancers cannot be fully neutral: they are hired. One has to be a Chinese national to work for the Chinese Foreign Ministry, and likewise a French national for the French Foreign Ministry. The interpreter works for the employer, not for the other side. They are referred to as affiliated interpreters. How would robots be able to exercise such delicate discretion? The message can only be deciphered by a mind-reader, i.e. well-trained human interpreters.
With the rise of AI, will the jobs of simultaneous interpreters be among the
last to go?

2 AI: past, present and the future


What is Artificial Intelligence? There is no single definition of AI that is universally
accepted by practitioners. Some define AI loosely as a computerized system that
exhibits behavior that is commonly thought of as requiring intelligence. Others
define AI as a system capable of rationally solving complex problems or taking
appropriate actions to achieve its goals in whatever real world circumstances it
encounters. Experts offer differing taxonomies of AI problems and solutions. A popular AI textbook used the following taxonomy: (1) systems that think like humans (e.g., cognitive architectures and neural networks); (2) systems that act like humans (e.g., passing the Turing test via natural language processing, knowledge representation, automated reasoning, and learning); (3) systems that think rationally (e.g., logic solvers, inference, and optimization); and (4) systems that act rationally (e.g., intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision-making, and acting). Separately, venture capitalist Frank Chen (2016) broke down the problem space of AI into five general categories: logical reasoning, knowledge representation, planning and navigation, natural language processing, and perception. And AI researcher Pedro Domingos (2015) assigned AI researchers to five "tribes" based on the methods they use: "symbolists" use logical reasoning based on abstract symbols; "connectionists" build structures inspired by the human brain; "evolutionaries" use methods inspired by Darwinian evolution; "Bayesians" use probabilistic inference; and "analogizers" extrapolate from similar cases seen previously.
This diversity of AI problems and solutions, and the foundation of AI in
human evaluation of the performance and accuracy of algorithms, makes it
difficult to clearly define a bright-line distinction between what constitutes AI and
what does not. For example, many techniques used to analyze large volumes of
data were developed by AI researchers and are now identified as “Big Data”
algorithms and systems. In some cases, opinion may shift, meaning that a
problem is considered as requiring AI before it has been solved, but once a
solution is well known it is considered routine data processing. Although the
boundaries of AI can be uncertain and have tended to shift over time, what is
important is that a core objective of AI research and applications over the years
has been to automate or replicate intelligent behavior.

i A Brief History of AI

Endowing computers with human-like intelligence has been a dream of computer experts since the dawn of electronic computing. Although the term "Artificial Intelligence" was not coined until 1956, the roots of the field go back to at least the 1940s (Warren S. McCulloch and Walter H. Pitts, 1943: 115–133), and the idea of AI was crystallized in Alan Turing's famous 1950 paper, "Computing Machinery and Intelligence." Turing's paper posed the question: "Can machines think?" It
also proposed a test for answering that question, and raised the possibility that a
machine might be programmed to learn from experience much as a young child
does. In the ensuing decades, the field of AI went through ups and downs as some
AI research problems proved more difficult than anticipated and others proved
insurmountable with the technologies of the time. It wasn't until the late 1990s that research progress in AI began to accelerate, as researchers focused more on sub-problems of AI and the application of AI to real-world problems such as image recognition and medical diagnosis. An early milestone was the 1997 victory of IBM's chess-playing computer Deep Blue over world champion Garry Kasparov.
Other significant breakthroughs included DARPA’s Cognitive Agent that Learns
and Organizes (CALO), which led to Apple Inc.’s Siri; IBM’s question-answering
computer Watson’s victory in the TV game show “Jeopardy!”; and the surprising
success of self-driving cars in the DARPA Grand Challenge competitions in the
2000s.

The current wave of progress and enthusiasm for AI began around 2010, driven
by three factors that built upon each other: the availability of big data from sources
including e-commerce, businesses, social media, science, and government; which
provided raw material for dramatically improved machine learning approaches
and algorithms; which in turn relied on the capabilities of more powerful compu-
ters. During this period, the pace of improvement surprised AI experts. For exam-
ple, on a popular image recognition challenge that has a 5 percent human error
rate according to one error measure, the best AI result improved from a 26 percent
error rate in 2011 to 3.5 percent in 2015. Simultaneously, industry has been increas-
ing its investment in AI. In 2016, Google Chief Executive Officer (CEO) Sundar
Pichai said, “Machine learning [a subfield of AI] is a core, transformative way by
which we’re rethinking how we’re doing everything. We are thoughtfully applying
it across all our products, be it search, ads, YouTube, or Play. And we’re in early
days, but you will see us—in a systematic way— apply machine learning in all these

areas.” This view of AI broadly impacting how software is created and delivered
was widely shared by CEOs in the technology industry

ii The Current State of AI

Remarkable progress has been made on what is known as Narrow AI, which
addresses specific application areas such as playing strategic games, language
translation, self-driving vehicles, and image recognition. Narrow AI is not a single
technical approach, but rather a set of discrete problems whose solutions rely on
a toolkit of AI methods along with some problem-specific algorithms. The diver-
sity of Narrow AI problems and solutions, and the apparent need to develop
specific methods for each Narrow AI application, has made it infeasible to “gen-
eralize” a single Narrow AI solution to produce intelligent behavior of general
applicability. Narrow AI underpins many commercial services such as trip plan-
ning, shopper recommendation systems, and ad targeting, and is finding impor-
tant applications in medical diagnosis, education, and scientific research.
General AI (sometimes called Artificial General Intelligence, or AGI) refers to
a notional future AI system that exhibits apparently intelligent behavior at least
as advanced as a person across the full range of cognitive tasks. A broad chasm
seems to separate today’s Narrow AI from the much more difficult challenge of
General AI. Attempts to reach General AI by expanding Narrow AI solutions have
made little headway over many decades of research.
Expert opinion on the expected arrival date of AGI ranges from 2030 to
centuries from now. There is a long history of excessive optimism about AI. For
example, AI pioneer Herb Simon predicted in 1957 that computers would outplay
humans at chess within a decade, an outcome that required 40 years to occur.
Early predictions about automated language translation also proved wildly opti-
mistic, with the technology only becoming usable (and by no means fully fluent)
in the last several years. It is tempting but incorrect to extrapolate from the ability
to solve one particular task to imagine machines with a much broader and deeper
range of capabilities and to overlook the huge gap between narrow task-oriented
performance and the type of general intelligence that people exhibit.

iii Machine Learning

Machine learning is one of the most important technical approaches to AI and the
basis of many recent advances and commercial applications of AI. In a sense,
machine learning is not an algorithm for solving a specific problem, but rather a
more general approach to finding solutions for many different problems, given
data about them. To apply machine learning, a practitioner starts with a historical
data set, which the practitioner divides into a training set and a test set. The
practitioner chooses a model, or mathematical structure that characterizes a range
of possible decision-making rules with adjustable parameters. A common analogy
is that the model is a “box” that applies a rule, and the parameters are adjustable
knobs on the front of the box that control how the box operates. In practice, a
model might have many millions of parameters. The practitioner also defines an
objective function used to evaluate the desirability of the outcome that results
from a particular choice of parameters. The objective function will typically
contain parts that reward the model for closely matching the training set, as well
as parts that reward the use of simpler rules. Training the model is the process of
adjusting the parameters to maximize the objective function. Training is the
difficult technical step in machine learning. A model with millions of parameters
will have astronomically more possible outcomes than any algorithm could ever
hope to try, so successful training algorithms have to be clever in how they explore
the space of parameter settings so as to find very good settings with a feasible level
of computational effort. Once a model has been trained, the practitioner can use
the test set to evaluate the accuracy and effectiveness of the model. The goal of
machine learning is to create a trained model that will generalize—it will be
accurate not only on examples in the training set, but also on future cases that it
has never seen before. While many of these models can achieve better-than-
human performance on narrow tasks such as image labeling, even the best models
can fail in unpredictable ways. For example, for many image labeling models it is
possible to create images that clearly appear to be random noise to a human but
will be falsely labeled as a specific object with high confidence by a trained model.
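The workflow described above (historical data divided into training and test sets, a parameterised model, an objective function that rewards fit and penalises complexity, training as a search over parameter settings, and evaluation on held-out data) can be sketched in a few lines of Python; all names and numbers here are illustrative, not drawn from any particular system:

```python
def model(x, w):
    """A one-parameter 'box': predict y from x using the knob w."""
    return w * x

def objective(w, data, reg=0.01):
    """Reward close fit to the data; penalise complex (large) settings."""
    mse = sum((y - model(x, w)) ** 2 for x, y in data) / len(data)
    return -mse - reg * w * w

# A historical data set, divided into a training set and a test set.
data = [(x, 2.0 * x + 0.1) for x in range(10)]
train, test = data[:7], data[7:]

# "Training": explore the parameter space for a very good setting.
# Real training uses clever search (e.g. gradient descent), not a grid.
best_w = max((w / 100 for w in range(-500, 500)),
             key=lambda w: objective(w, train))

# Evaluation: does the trained model generalise to unseen cases?
test_error = sum((y - model(x, best_w)) ** 2 for x, y in test) / len(test)
print(f"learned w = {best_w:.2f}, test error = {test_error:.3f}")
```

Grid search stands in for real training here; as the text notes, practical models have millions of parameters, so training algorithms must explore the space of settings far more cleverly than this.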

iv Deep Learning

In recent years, some of the most impressive advancements in machine learning have been in the subfield of deep learning, also known as deep network learning.
Deep learning uses structures loosely inspired by the human brain, consisting of
a set of units (or “neurons”). Each unit combines a set of input values to produce
an output value, which in turn is passed on to other neurons downstream.

Figure 5: Simulated neuron layers

Deep learning networks typically use many layers—sometimes more than 100—
and often use a large number of units at each layer, to enable the recognition of
extremely complex, precise patterns in data. In recent years, new theories of how
to construct and train deep networks have emerged, as have larger, faster compu-
ter systems, enabling the use of much larger deep learning networks. The dra-
matic success of these very large networks at many machine learning tasks has
come as a surprise to some experts, and is the main cause of the current wave of
enthusiasm for machine learning among AI researchers and practitioners.
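The description of units and layers above can be made concrete with a toy forward pass. The weights below are arbitrary illustrative constants; in a real network they would be set by training:

```python
import math

def neuron(inputs, weights, bias):
    """One unit: combine input values into a single output value
    (weighted sum plus bias, squashed into (0, 1) by a sigmoid)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """One layer: every neuron sees the full input vector, and the
    layer's outputs are passed downstream to the next layer."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A tiny two-layer network: 3 inputs -> 2 hidden units -> 1 output.
# Deep networks stack many such layers, sometimes more than 100.
x = [0.5, -1.2, 3.0]
hidden = layer(x, [[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]], [0.0, 0.1])
output = layer(hidden, [[1.5, -2.0]], [0.2])
print(output)
```

Chaining `layer` calls is all "depth" means here; the recognition power of real deep networks comes from training the many weights across those stacked layers.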

v Ergonomics: Human-Machine Team

Unlike automation, where a machine substitutes for human work, in some cases a
machine will complement human work. This may happen as a side-effect of AI
development, or a system might be developed specifically with the goal of creating
a human-machine team. Systems that aim to complement human cognitive cap-
abilities are sometimes referred to as intelligence augmentation. In many applica-
tions, a human-machine team can be more effective than either one alone, using
the strengths of one to compensate for the weaknesses of the other. One example
is in chess playing, where a weaker computer can often beat a stronger computer
player, if the weaker computer is given a human teammate—this is true even
though top computers are much stronger players than any human. Another
example is in radiology. In one recent study, given images of lymph node cells and asked to determine whether or not the cells contained cancer, an AI-based approach had a 7.5 percent error rate, whereas a human pathologist had a 3.5 percent error rate; a combined approach, using both AI and human input, lowered the error rate to 0.5 percent, representing an 85 percent reduction in error.
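The reduction figure can be checked with a line of arithmetic: the roughly 85 percent reduction compares the combined error rate against the better single performer, the human pathologist, using the percentages quoted above:

```python
# Error rates quoted in the study above, in percent.
human, ai, combined = 3.5, 7.5, 0.5

# Reduction relative to the best single performer (the human).
reduction = (human - combined) / human
print(f"error reduction vs. the human baseline: {reduction:.1%}")
# prints "error reduction vs. the human baseline: 85.7%"
```

which rounds to the approximately 85 percent reduction the study reports.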

vi Timeline?

A group of researchers from Oxford University and Yale's Department of Political Science published a paper titled "When Will AI Exceed Human Performance? Evidence from AI Experts" on May 24, 2017. To answer this question, the group ran
a survey involving 352 machine learning researchers, or people who work on
making machines smarter. The respondents were asked to estimate the year when
machines would achieve so-called “high-level machine intelligence” (HLMI), when
“unaided machines can accomplish every task better and more cheaply than hu-
man workers.”

Of course, in addition to playing Angry Birds, folding laundry, writing high school essays, composing a Top 40 pop song, or even beating AI researchers at their own game, language translation featured among the activities for which participants were asked to give a date by which machines would beat humans.
To be precise, the researchers hedged their question somewhat, asking when
computers would “perform translation about as good as a human who is fluent in
both languages but unskilled at translation, for most types of text, and for most
popular languages (including languages that are known to be difficult, like
Czech, Chinese, and Arabic).” For this type of (amateur) translation, the year is
2024, according to the survey. The survey also asked respondents to set a date for
two other types of translation. In translating “speech in a new language given
only unlimited films with subtitles in the new language,” machines will overtake
humans by 2026, respondents said.
And a further six years into the future, by 2032, computers will be able to
“translate a text written in a newly discovered language into English as well as a
team of human experts, using a single other document in both languages (like a
Rosetta stone). Suppose all of the words in the text can be found in the translated
document, and that the language is a difficult one.”

3 AI and SI
Language translation has long been considered the holy grail of AI. In an article
for Atlantic magazine published last year, computer programmer James Somers
provides an in-depth history of machine translation. The initial method involved bringing together professional linguists in a room as developers tried to "translate" their knowledge into a set of rules that a computer program could understand.
This rule-based approach inevitably failed, language being “too big and too
protean; for every rule obeyed, there’s a rule broken.” IBM developed a more
successful approach to machine translation in the late-1980s with a project called
Candide, using a technique known as “machine learning,” which has gone on to
become the cornerstone of AI.
Essentially, you feed the machine data – in this case millions of sentences in
both the source and target language – assign the right translations to every
word, develop an algorithm that tracks how often a certain word follows
another, and test it over and over again. For every mistake the machine makes,
corrections are input and the algorithm is adapted.
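
The counting procedure described above can be sketched in a few lines. The following toy example is a drastic simplification – the four-sentence English–German corpus is invented, and IBM's Candide used far more sophisticated alignment models – but it shows the core statistical idea: for each source word, pick the target word it co-occurred with most often across the parallel sentences.

```python
from collections import defaultdict

# Invented toy parallel corpus (hypothetical data, nothing like Candide's).
corpus = [
    ("the house", "das haus"),
    ("a house", "ein haus"),
    ("the book", "das buch"),
    ("a book", "ein buch"),
]

# Count how often each source word co-occurs with each target word.
counts = defaultdict(lambda: defaultdict(int))
for src, tgt in corpus:
    for s in src.split():
        for t in tgt.split():
            counts[s][t] += 1

def translate_word(word):
    """Return the target word that co-occurred most often with `word`."""
    candidates = counts.get(word)
    if not candidates:
        return word  # unknown word: pass it through unchanged
    return max(candidates, key=candidates.get)

print(translate_word("book"))   # "buch" co-occurs twice, beating "das" and "ein"
print(translate_word("house"))  # "haus" wins for the same reason
```

Even this crude frequency heuristic gets common words right on its tiny corpus, which hints at why sheer scale carried statistical translation a long way – and also why the approach plateaus below professional quality, since it models co-occurrence rather than meaning.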
Google Translate essentially runs on the same technique, only feeding the
machine virtually unfathomable amounts of data. (As Somers points out, who,
after all, owns more data than Google?) Until now, this seems to have worked well
enough – old machine learning techniques matched with newly adapted algo-
rithms and literally trillions of word and sentence combinations have given us the
Google Translate we know today – which is, well, sensible but by no means of a
professional standard. Phrases translated with the aid of Google's translation
algorithms still have a knack for jumping off the page.
The search giant's acquisition of DeepMind Technologies signals its intention
to adapt new machine learning algorithms for its search and translation func-
tions, delivering faster and more accurate results. Google said of its work on
language, translation and speech processing:
“In all of those tasks and many others, we gather large volumes of direct or
indirect evidence of relationships of interest, and we apply new algorithms to
generalise from that evidence to new cases of interest.”
AI is still based on behavioral predictions, which works in games and simula-
tions, but language is as much behavioral as it is a product of intelligence and
brain processing. Language, after all, is malleable: new metaphors and idioms
are conceived every day in every language, meanings shift and nuances rarely
sound as resonant in other languages. Google Translate will no doubt improve
over the coming years, with demand for quick and free translation ever rising.
However, based on interviews with Google Translate developers, the more ma-
chine translation improves and edges closer to the level of a professional transla-
tor/interpreter, the steeper the road becomes.
iFLYTEK Co., which provides the world's first open intelligent interactive
technology service platform, has made a name for itself through several demos of
its voice recognition software, which provides "interpreting" between Chinese and
English, as the following examples from its new products show: "In: You're
really generous. Out: 你真的很大方。" And "In: 我想请你吃饭。 Out: I want to
invite you to dinner." The subject matter is confined to everyday topics such as
travel, shopping, dining and partying.
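
One reason such demos impress is that a closed domain shrinks the problem dramatically. As a rough illustration only – the phrasebook below is invented apart from the two demo sentences quoted above, and iFLYTEK's actual system is of course far more sophisticated than a lookup table – constrained "interpreting" can be approximated as phrase retrieval:

```python
# Hypothetical sketch of why domain-limited "interpreting" is tractable:
# within a closed set of travel/dining phrases, translation reduces to lookup.
phrasebook = {
    "you're really generous.": "你真的很大方。",
    "i want to invite you to dinner.": "我想请你吃饭。",
    "how much is this?": "这个多少钱？",  # invented entry for illustration
}

def translate_phrase(text):
    """Return the stored translation, or None if the phrase is unknown."""
    return phrasebook.get(text.strip().lower())

print(translate_phrase("You're really generous."))  # 你真的很大方。
print(translate_phrase("Is this seat taken?"))      # None: outside the domain
```

The sketch also makes the limitation obvious: anything outside the anticipated domain simply fails, which is precisely where human interpreters, who handle novel input, remain indispensable.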

Despite the advances made in AI, it remains no match for the human mind's
capacity for intelligence, comprehension and imagination. Our reliance on the
internet may have made us lazier or even dumbed us down, but for the profes-
sional linguist, language, writing, and the ability to decipher hidden meaning and
articulate emotions remain an art form. Industries will benefit from new technol-
ogies and innovations, but professional human translation will remain the heart
and soul of the language industry.
While the author was wrapping up this paper, AI-related events were being
reported almost every day. The World Robots Conference 2017 took place, and
The Atlantic reported that Facebook researchers at its Artificial Intelligence
Research lab had used machine learning to train their "chat bots" to negotiate.
It turns out the bots are actually quite good at dealmaking, and in the process
they developed their own non-human language.
AI is evolving quickly. It can analyze data much better than humans. But
predictions about the future of AI mostly concern the optimization of rational
thought processes; in terms of emotion, AI is still struggling. AI may beat
people at rational tasks, and it may be of great help to simultaneous interpreters
if and when the ergonomics work out. But as long as simultaneous interpreters do
not produce mechanical word-for-word renditions like robots, AI is not likely to
replace professionals who bring adaptability, empathy and passion to what they do.

References
Byram, M. (1997): Teaching and Assessing Intercultural Communicative Competence. Clevedon:
Multilingual Matters.
Chen, Frank (2016): "AI, Deep Learning, and Machine Learning: A Primer." Andreessen Horowitz,
June 10, 2016. [Link]
Delisle, Jean (1988): Translation: An Interpretive Approach. Ottawa: University of Ottawa Press.
Domingos, Pedro (2015): The Master Algorithm: How the Quest for the Ultimate Learning Machine
Will Remake Our World. New York: Basic Books.
Faes, Florian (2017): "Yale and Oxford Enter the Business of Predicting the End of the Human
Translator." Retrieved from [Link]
Hornik, K. / Stinchcombe, M. B. / White, H. (1989): "Multilayer Feedforward Networks Are
Universal Approximators." Neural Networks 2 (5), 359–366.
Kasparov, Garry (2010): "The Chess Master and the Computer." New York Review of Books,
February 11, 2010. [Link]
Levy, Steven (2016): "How Google Is Remaking Itself as a Machine Learning First Company."
Backchannel, June 22, 2016. [Link]
McCulloch, Warren S. / Pitts, Walter H. (1943): "A Logical Calculus of the Ideas Immanent in
Nervous Activity." Bulletin of Mathematical Biophysics 5, 115–133.
Patel, Prachi (2016): "Computer Vision Leader Fei-Fei Li on Why AI Needs Diversity." Retrieved
from [Link]
Russell, Stuart / Norvig, Peter (2009): Artificial Intelligence: A Modern Approach. 3rd edition.
Essex: Pearson.
