Thursday 16 November

08:30 Registration and welcome coffee in Gallery and Marble Hall

Morning session

Lecture Theatre | Education Room
09:00 Opening Ceremony:
AsLing President’s Welcome Address and Sponsors’ Thought Leadership talks

09:45
Chair: Ruslan Mitkov

Keynote Address

From Text to Concepts and Back: Going Multilingual with BabelNet in a Step or Two

In this talk I will introduce the most recent developments of the BabelNet technology, winner of several scientific awards and of great interest to interpreters and translators. I will first describe BabelNet live – the largest, continuously-updated multilingual encyclopedic dictionary – and then discuss a range of cutting-edge industrial use cases implemented by Babelscape, our Sapienza startup company, including: multilingual interpretation and mapping of terms; multilingual concept and entity extraction from text; semantically-enhanced multilingual translation, and cross-lingual text similarity.
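As an illustration of going from text to concepts and back, here is a minimal Python sketch against BabelNet’s public HTTP interface: it maps a term to language-independent synset identifiers and then back to lexicalizations in another language. The version prefix, endpoint names and response fields are assumptions based on the published BabelNet REST documentation and should be verified at babelnet.io.

```python
# Hedged sketch: map a term to BabelNet concepts and back to another language.
# Endpoint paths, parameter names and response fields are assumptions based on
# the public BabelNet REST docs; check babelnet.io for the current API.
import requests

API = "https://babelnet.io/v5"   # assumed version prefix
KEY = "YOUR_API_KEY"             # key obtainable from babelnet.io

def term_to_concepts(lemma, lang="EN"):
    """Map a surface term to language-independent BabelNet synset ids."""
    r = requests.get(f"{API}/getSynsetIds",
                     params={"lemma": lemma, "searchLang": lang, "key": KEY})
    r.raise_for_status()
    return [entry["id"] for entry in r.json()]

def concept_to_terms(synset_id, target_lang="IT"):
    """Map a concept back to its lexicalizations in another language."""
    r = requests.get(f"{API}/getSynset",
                     params={"id": synset_id, "targetLang": target_lang, "key": KEY})
    r.raise_for_status()
    return [s["properties"]["fullLemma"] for s in r.json().get("senses", [])]

for sid in term_to_concepts("bank"):
    print(sid, concept_to_terms(sid))      # one line per concept of "bank"
```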

Roberto Navigli (Sapienza Università di Roma)

Roberto Navigli is Professor of Computer Science at the Sapienza University of Rome, where he heads the multilingual Natural Language Processing group. He was awarded the Marco Somalvico 2013 AI*IA Prize for the best young researcher in AI. He is one of the few Europeans to have received two prestigious ERC grants in computer science, namely an ERC Starting Grant on multilingual word sense disambiguation (2011-2016) and an ERC Consolidator Grant on multilingual language- and syntax-independent open-text unified representations (2017-2022).

He was also a co-PI of a Google Focused Research Award on NLP. In 2015 he received the META prize for groundbreaking work in overcoming language barriers with BabelNet, a project also highlighted in TIME magazine and presented in the most cited 2012 paper in the Artificial Intelligence Journal, a journal for which he is currently an Associate Editor.

Based on the success of BabelNet and its multilingual disambiguation technology, he co-founded Babelscape, a Sapienza startup company which enables HLT in hundreds of languages.

10:45  Health Break in Gallery and Marble Hall

During Health Break

10:55 – 11:10  Poster

On the Need for New Computer Aids for Translating Writers

In an experiment in French companies, we identified a new multilingual writing situation for which there seem to be no dedicated translation aids. Given the writers’ relative lack of competence in the target language (usually, but not always, English), we propose to build a new kind of CAT tool incorporating interactive (but asynchronous) disambiguation of the source text at some point during analysis, together with specific post-editing (PE) aids tuned to user profiles.

The new system would be a DBMT (Dialogue-Based MT) system that allows the user to edit the source and target texts during the translation process, instead of translating and then post-editing the target text. The system would exploit two advantages of this new situation: (1) the expert knowledge these “translating writers” have of their domain, and (2) their ability, unlike a regular translator’s, to change both texts.

This approach would address one of the main problems of MT, namely the poor quality of the source texts usually given to translators. Moreover, the MT results would be assessed immediately and, if a translation is bad, regenerated immediately after a change in the source text, making the source text more machine-translatable.
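To make the proposed workflow concrete, here is a minimal sketch of the dialogue-based loop: the system analyses the source, puts disambiguation questions to the writer (asynchronously in the real setting), lets the writer edit the source, and only then translates. All names below are hypothetical placeholders, not the authors’ system.

```python
# Hedged sketch of a Dialogue-Based MT (DBMT) session: the writer refines the
# SOURCE text until analysis raises no more questions, then the system
# translates it. translate, find_ambiguities and ask_user are placeholders.
def dbmt_session(source, translate, find_ambiguities, ask_user):
    while True:
        questions = find_ambiguities(source)   # interactive disambiguation point
        if not questions:
            break
        for q in questions:
            # Asynchronous in practice: the writer answers when convenient.
            source = ask_user(q, source)       # writer clarifies or edits source
    target = translate(source)                 # translate the improved source
    return source, target                      # both texts remain editable
```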

Claire Lemaire and Christian Boitet (Université de Grenoble Alpes)

Claire Lemaire is a translator who worked in the IT industry before studying computational linguistics. She has just finished a PhD on the translation technology practices of specialised translators and domain experts, in the ILCEA4 laboratory, and is currently a Visiting Researcher at the LIG-GETALP laboratory.

Christian Boitet is an emeritus professor at Université Grenoble Alpes and continues his research on MT and CAT in the LIG laboratory. He started in 1970 with Professor Vauquois and succeeded him as director of the GETA study group on MT, which he led from 1985 to 2007. His new project aims at using machine learning (with deep learning) to develop UNL-based enconverters and deconverters for as many under-resourced languages as possible.

11:15
Beyond Neural MT

A lot of hype and excitement has surrounded the latest advances in Neural Machine Translation (NMT), generally with some justification: output is typically more fluent and closer to normal human output. Nevertheless, some of the claims need to be qualified, and practical implementation of NMT is not without difficulty: typically double the training material is required compared with Statistical Machine Translation (SMT), and training a new engine can take weeks, not days. Although NMT can produce much more fluent output than SMT, it can have limited impact on real-world localization tasks. Extensive tests have shown that, in the end, there is no great improvement in post-editing throughput, and NMT is no panacea for omission or mistranslation. With NMT it is also impossible to ‘tune’ the output as can be done with SMT: you have no knowledge of how the NMT engine has made its decisions.

Most practical translation projects have nowhere near enough training data, nor the luxury of waiting weeks for an engine to be trained even if there were enough. If the training has been done on unrelated material, or on material not directly relevant to the customer’s terminology, misleading results can be produced: because of its improved fluency, NMT can make such mistakes harder to identify. NMT quality can also drop with sentence length, and translating from a morphologically rich language into a morphologically impoverished one can in fact produce worse results than SMT.

Andrzej Zydroń (XTM)

Andrzej Zydroń MBCS CITP

CTO @ XTM International, Andrzej Zydroń is one of the leading IT experts on Localization and related Open Standards. Zydroń sits or has sat on the following Open Standards technical committees:

1. LISA OSCAR GMX
2. LISA OSCAR xml:tm
3. LISA OSCAR TBX
4. W3C ITS
5. OASIS XLIFF
6. OASIS Translation Web Services
7. OASIS DITA Translation
8. OASIS OAXAL
9. ETSI LIS
10. DITA Localization
11. Interoperability Now!
12. Linport

Zydroń has been responsible for the architecture of the essential word and character count GMX-V (Global Information Management Metrics eXchange) standard, as well as the revolutionary xml:tm (XML based text memory) standard which will change the way in which we view and use translation memory. Zydroń is also chair of the OASIS OAXAL (Open Architecture for XML Authoring and Localization) reference architecture technical committee which provides an automated environment for authoring and localization based on Open Standards.
Zydroń has worked in IT since 1976 and has been responsible for major successful projects at Xerox, SDL, Oxford University Press, Ford of Europe, DocZone and Lingo24 in the fields of document imaging, dictionary systems and localization. Zydroń is currently working on new advances in localization technology based on XML and linguistic methodology.
Highlights of his career include:
1. The design and architecture of the European Patent Office patent data capture system for Xerox Business Services.
2. Writing a system for the automated optimal typographical formatting of generically encoded tables (1989).
3. The design and architecture of the Xerox Language Services XTM translation memory system.
4. Writing the XML and SGML filters for SDL International’s SDLX Translation Suite.
5. Assisting Oxford University Press, the British Council and Oxford University in work on the New Dictionary of National Biography.
6. Design and architecture of Ford’s revolutionary CMS Localization system and workflow.
7. Technical Architect of XTM International’s revolutionary Cloud based CAT and translation workflow system: XTM.

Specific areas of specialization:
1. Advanced automated localization workflow
2. Author memory
3. Controlled authoring
4. Advanced Translation memory systems
5. Terminology extraction
6. Terminology Management
7. Translation Related Web Services
8. XML based systems
9. Web 2.0 Translation related technology

11:15 – 12:45 Gold Sponsor Workshop

Seamlessly Integrating Machine Translation into Existing Translation Processes

Having developed both STAR Transit (translation memory system) and STAR MT (machine translation platform), STAR has combined these into a single integrated solution. Transit creates training packages for the MT using existing TM and terminology. STAR routinely extracts text from any file format and leverages terminology to ensure that MT engines are optimally trained for the customer’s translations.
(1) In the translation process with Transit, files are imported, pretranslated with TM and any untranslated segments transferred to the MT engine. MT suggestions are then sent back. MT integration has been made as simple as possible for translators: During project exchange, MT suggestions are automatically packed into the project package. Translators therefore do not require access to the MT system as they receive the MT suggestions with the project.
(2) STAR MT Translate is a browser solution that gives staff access to the trained STAR MT engines. Sentences, paragraphs or documents are transferred to STAR MT, which then returns machine translations. The returned translations of company-specific texts follow the customer’s corporate terminology and style, allowing customers to create an in-house, real-time translation solution that is used solely by them and cannot be mined for data.
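As a rough illustration of step (1), the sketch below shows the usual TM-first, MT-fallback pretranslation logic; `fuzzy_match` and `mt_translate` are hypothetical stand-ins, not STAR’s actual API.

```python
# Hedged sketch of TM-then-MT pretranslation: exact TM matches are accepted,
# fuzzy matches are offered for editing, and everything else gets an MT
# suggestion packed into the project package for the translator.
def pretranslate(segments, tm, mt_translate, threshold=0.7):
    package = []
    for seg in segments:
        match, score = tm.fuzzy_match(seg)     # hypothetical TM lookup
        if score >= 1.0:
            package.append((seg, match, "TM exact"))
        elif score >= threshold:
            package.append((seg, match, f"TM fuzzy {score:.0%}"))
        else:
            package.append((seg, mt_translate(seg), "MT suggestion"))
    return package                             # translators need no MT access
```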

Moderated by Judith Klein (STAR)

Judith Klein (MA Information Science) has over 18 years’ experience in language technology. She joined STAR Germany in 1999 where she works as an expert in support, training and consulting for STAR’s language technology tools. Her most recent interest lies in STAR’s MT technology.
Before she came to STAR, she worked in the Language Technology department at the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken.

11:45
Creating a Tool for Multimodal Translation and Post-editing on Touch-screen Devices

Only a few translation tools have been created with an ‘organic’ integration of TM and MT, i.e. tools designed to work equally well for post-editing MT and for handling TM matches. Still, even these few options are based on the traditional input modes of keyboard and mouse. Building on our experience in creating a prototype mobile post-editing interface for smartphones, we have created a translation editing environment that accepts additional input modes, such as touch commands (on devices equipped with touch screens, such as tablets and select laptops) and voice commands, using automatic speech recognition. Another important feature of the tool is the inclusion of accessibility principles from the outset, with the aim of opening translation editing to professionals with special needs. In particular, the tool is being designed to cater for blind translators. Our presentation will report on initial usability tests with an early version of the tool. The results include productivity measurements as well as data collected through satisfaction reports. Our ultimate goal is to test whether the tool can help alleviate some of the pain points of the edit-intensive, mechanical task of desktop post-editing.

Carlos Teixeira, Joss Moorkens (Dublin City University)

Carlos Teixeira is a post-doctoral researcher in the ADAPT Centre for Intelligent Digital Content Technology and a member of the Centre for Translation and Textual Studies (CTTS) at Dublin City University (DCU). He holds a PhD in Translation and Intercultural Studies and Bachelor degrees in Electrical Engineering and Linguistics. His research interests include Translation Technology, Translation Process Research, Translator-Computer Interaction, Localisation and Specialised Translation. He has vast experience in the use of eye tracking for assessing the usability of translation tools. His industry experience includes over 15 years working as a translator, localiser and language consultant.

Joss Moorkens is a lecturer in Translation Technology in the School of Applied Language and Intercultural Studies (SALIS) at DCU and a researcher in the ADAPT Centre and CTTS. Within ADAPT, he has contributed to the development of translation tools for both desktop and mobile. He is co-editor of a book on human and machine translation quality and evaluation (due in 2018) and has authored journal articles and book chapters on topics such as translation technology, post-editing of machine translation, human and automatic translation quality evaluation, and ethical issues in translation technology in relation to both machine learning and professional practice.

Daniel Turner is a research engineer in the ADAPT Centre’s Design & Innovation Lab (dLab). Within ADAPT, he has contributed to projects with a strong focus on rapid prototyping of user interfaces. He is proficient in full stack development with experience using a variety of languages and tools.

Joris Vreeke is Scrum Master and Senior Software Engineer in the ADAPT Centre’s dLab. He has a background in software development and design with a preference for graphics, UI/UX and web application development.

Andy Way is a professor in the School of Computing at DCU and leads ADAPT Centre’s Transforming Digital Content theme as well as the Localisation spoke, supervising projects with prominent industry partners. He has published over 350 peer-reviewed papers and successfully graduated numerous PhD and MSc students. His research interests include all areas of machine translation such as statistical MT, example-based MT, neural MT, rule-based MT, hybrid models of MT, MT evaluation and MT teaching.

Workshop continued
12:15
Three-dimensional quality model: The focal point of workflow management in organisational ergonomics

Although quality is a central concept in every act of translating, it has proved difficult to define and has therefore remained elusive. Generally, approaches to quality, both in translation studies and in the translation industry, have concentrated on product and/or process quality. Yet, in present-day human- and machine-mediated, collaborative translation production networks, the challenge of defining and managing quality comprehensively has become more acute than ever before.

This paper contributes to the discussion on the organizational ergonomics of translation by presenting a three-dimensional quality model. The model encompasses not only the familiar product and process dimensions, but also a third dimension called social quality. Social quality, the focus of this paper, addresses the relations of the actors involved, both human and non-human, and their organizational interaction. The theoretical discussion on quality is complemented by a recent case from Finland concerning the working conditions of the audio-visual translators of Star Wars: The Force Awakens and their impact on translation quality. By emphasizing that quality is a multidimensional concept which also includes social and ethical aspects, the paper argues for workflow management that caters to the needs of people, who are the bedrock of the industry.

Kristiina Abdallah (Universities of Vaasa and Jyväskylä)

Kristiina Abdallah has worked as a translator, subtitler and technical writer. Since 2001 she has held various positions at the University of Tampere, namely that of an assistant, a lecturer and researcher. As of 2010 she has worked as a university teacher, first at the University of Eastern Finland, and currently at the Universities of Vaasa and Jyväskylä. She defended her doctoral thesis entitled Translators in Production Networks. Reflections on Agency, Quality and Ethics in 2012. Her research interests are translation sociology and, more specifically, translators’ workplace studies.

Workshop continued
12:45  Buffet Lunch in Gallery and Marble Hall

During Buffet Lunch

13:40 – 13:55 Poster

Towards a Hybrid Intralinguistic Subtitling Tool: MIRO Translate

Making audiovisual educational material accessible for non-native speakers and people who are deaf or hard-of-hearing is an ongoing challenge, and the state of the art in this field shows that no current software provides a fully automatic, high-quality solution. This article presents Miro Translate, a hybrid intralinguistic subtitling tool developed in response to this challenge by the MIRO Programme at the University of Perpignan Via Domitia. This cloud-based solution proposes synergies between machine and human subtitlers to simplify the technical and linguistic tasks of subtitling. To this end, Miro Translate integrates the automatic speech recognition (ASR) technology provided by the Microsoft Translator Speech API, a deep neural network (DNN) system, and TrueText technology to improve speech readability. Auto-generated captions are a way to enhance productivity. Nevertheless, certain subtitling conventions must be observed to provide a readable and legible target output. A set of pre-editing and post-editing functionalities has been identified to assist the machine in the spotting task and to improve the quality of auto-generated captions. In conclusion, Miro Translate constitutes a cost-efficient solution to meet the increasing demand for high-quality captions for video lectures.
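To illustrate the spotting step such a tool must automate, here is a minimal Python sketch that turns timed ASR segments into SRT captions under a common readability convention (an assumed limit of 42 characters per line, two lines per subtitle); Miro Translate’s actual rules may differ.

```python
# Hedged sketch: convert (start, end, text) ASR segments into SRT blocks,
# wrapping lines to an assumed 42-character subtitling convention.
import textwrap

def srt_timestamp(seconds):
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    ms = int((seconds - int(seconds)) * 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments, max_chars=42, max_lines=2):
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        lines = textwrap.wrap(text, width=max_chars)[:max_lines]
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n"
                      + "\n".join(lines))
    return "\n\n".join(blocks)

print(to_srt([(0.0, 2.5, "Welcome to this lecture on machine translation.")]))
```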

Laura Cacheiro Quintas (Université de Perpignan)

Laura Cacheiro Quintas is a PhD student in Audiovisual Translation and New Technologies. Her thesis is co-directed by the Université de Perpignan Via Domitia (France) and the Universitat Jaume I (Spain). Her research focuses on the integration of new technologies in the subtitling of video lectures, with the purpose of assisting translators in their professional activity.
Currently working as a translator for the MIRO Programme at the University of Perpignan, where she tests the implementation of CAT tools and their adaptation to translation workflows, Laura also teaches Audiovisual Translation and Interpreting to second and third year undergraduate students at this university.

Afternoon session

Lecture Theatre | Education Room
Chair: Juliet Macan


14:00
Evaluation of NMT and SMT Systems: A Study on Uses and Perceptions

Statistical and neural approaches have permitted fast improvement in the quality of machine translation, but we have yet to discover how those technologies can best “serve translators and end users of translations” (Kenny, 2017). To address human issues in machine translation, we propose an interdisciplinary approach linking Translation Studies, Natural Language Processing and Philosophy of Cognition. Our collaborative project is a first step in connecting sound knowledge of Machine Translation (MT) systems to a reflection on their implications for the translator. It focuses on the most recent Statistical MT (SMT) and Neural MT (NMT) systems and their impact on the translator’s activity. BTEC-corpus machine translations, from in-house SMT and NMT systems, are subjected to a comparative quantitative analysis based on BLEU, TER (Translation Edit Rate) and the modified version of METEOR from the LIG (Servan et al., 2016). We then qualitatively analyse translation errors against linguistic criteria (Vilar, 2006) or the MQM (Multidimensional Quality Metrics) using LIG tools, to determine, for each MT system, which syntactic patterns give rise to translation errors and which error types are most frequent. We finally assess translators’ interactions with the main error types in a short post-editing task completed by 10 freelance translators and 20 trainees.
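For readers unfamiliar with the metrics, the sketch below shows the kind of quantitative comparison described, using the sacrebleu Python package for BLEU and TER (the LIG-modified METEOR is omitted); the sentences are invented examples, not BTEC data.

```python
# Hedged sketch of a BLEU/TER comparison with sacrebleu (pip install sacrebleu).
from sacrebleu.metrics import BLEU, TER

references = [["The patient should take two tablets per day."]]  # one ref stream
smt_output = ["The patient should take two tablet every day."]
nmt_output = ["Patients should take two tablets a day."]

bleu, ter = BLEU(), TER()
for name, hyp in [("SMT", smt_output), ("NMT", nmt_output)]:
    print(name,
          f"BLEU={bleu.corpus_score(hyp, references).score:.1f}",
          f"TER={ter.corpus_score(hyp, references).score:.1f}")
```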

Emmanuelle Esperanca-Rodier (Université de Grenoble-Alpes)

Emmanuelle Esperança-Rodier is a lecturer at Univ. Grenoble Alpes (UGA), France, where she teaches English for Specific Purposes, and a member of the Laboratoire d’Informatique de Grenoble (LIG). After defending a PhD in computational linguistics on “Création d’un diagnostic générique de langues contrôlées, avec application particulière à l’anglais simplifié”, she worked as a post-editor in a translation agency. Back at university, she participated in IWSLT and WMT evaluation campaigns, as well as in several LIG projects. She now works on the evaluation of MT systems based on competences and focused on tasks, translation error analysis and multilingualism.

Co-authors

Prof. Laurent Besacier defended his PhD thesis in Computer Science (Univ. Avignon, France) in 1998 on “A parallel model for automatic speaker recognition”. He then spent a year and a half at the Institute of Microengineering (EPFL, Neuchâtel site, Switzerland) as an associate researcher working on multimodal person authentication (M2VTS European project). Since 1999 he has been an associate professor (full professor since 2009) in Computer Science at Univ. Grenoble Alpes (formerly Univ. Joseph Fourier). From September 2005 to October 2006, he was an invited scientist at the IBM Watson Research Center (NY, USA), working on speech-to-speech translation.
His research interests are mainly related to multilingual speech recognition and machine translation. Laurent Besacier has published 200 papers in conferences and journals on speech and language processing, and has supervised or co-supervised 20 PhDs and 30 Masters. He has been involved in several national and international projects, as well as several evaluation campaigns. Since October 2012, he has been a junior member of the “Institut Universitaire de France”, with a project entitled “From under-resourced languages processing to machine translation: an ecological approach”.

Caroline Rossi is a lecturer in the Applied Modern Languages department at Univ. Grenoble Alpes, where she teaches English and translation. She is a member of the Multilingual Research Group on Specialized Translation (GREMUTS) within ILCEA4 (Institut des Langues et Cultures d’Europe, Amérique, Afrique, Asie, Australie). Her current research focus is on integrating critical skills and understanding of both statistical and neural machine translation in translator training.

Alexandre Bérard has been a PhD student at the University of Lille since 2014 (supervised by Prof. Laurent Besacier and Prof. Olivier Pietquin). He worked with the SequeL team (specialized in machine learning) at Inria Lille and then, from 2016, with GETALP (specialized in NLP) at the University of Grenoble.
Specialized in Neural Machine Translation techniques, in particular for automatic post-editing and end-to-end speech translation, he obtained a software engineering degree from INSA Rennes in 2014 and a Master’s degree in data science from the University of Rennes.

Adapting a Computer Assisted Translation MA Course to New Trends

This paper presents how we adapted our MA CAT-tool course to two current trends. The first is the sharp rise in the number of students enrolled in the class. We report on its impact by describing how challenging it has become to give them the assignment described in Starlander and Morado Vazquez (2013) and in Starlander (2015). The discussion is oriented towards how to teach this evaluation methodology differently, by adapting the content and, above all, the teaching methods (crowdsourcing, online quizzes). We describe in detail how these new activities fit into the overall course content. The second trend is the integration of MT into CAT tools. How can we best introduce this evolution into our teaching? We present the main results of a preliminary experiment integrating a translation exercise involving the use of MT. The final discussion is dedicated to the more general teaching challenges posed by the constantly shifting trends in translation technology.

Marianne Starlander (Université de Genève)

Dr. Marianne Starlander is a CAT-tool specialist and lecturer at the Faculty of Translation and Interpreting of the University of Geneva. She joined the multilingual information processing department in 2000, where she worked as a teaching and research assistant and is now a member of the teaching staff. She originally trained as a translator at the same faculty and also holds a post-graduate degree in European Studies from the European Institute of the University of Geneva (2000).
She was coordinator of SUISSETRA (the Swiss association for the promotion of CAT tools) from 2008 to 2012. She was involved in the MedSLT (medical spoken-language translator) research project and worked on spoken-language translation evaluation issues in the framework of her thesis, published in 2016. She is responsible for CAT-tool training at MA level and is involved in the continuing-education programme in CAT tools.

14:30
WIPO Pearl - The Terminology Portal of the World Intellectual Property Organization

In this paper we shall present WIPO Pearl, the multilingual terminology portal of the World Intellectual Property Organization, a specialized agency of the United Nations. The nature of the linguistic dataset made available in WIPO Pearl will be described, and we shall show how multilingual knowledge representation is achieved and graphically displayed. Secondly, we shall demonstrate how such data is exploited to facilitate searching prior art for patent filing or examination purposes, by leveraging the validated linguistic content as well as the validated conceptual relations presented in “concept maps”. We shall discuss how, in addition to human-validated concept maps, “concept clouds” are generated by means of machine-learning algorithms which automatically cluster concepts in the database by exploiting textual data embedded in the terminology repository. Finally, we shall present opportunities for collaboration with WIPO in the field of terminology.
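Purely as an illustration of the clustering idea behind “concept clouds” (not WIPO’s actual pipeline), the sketch below groups concepts by the textual data attached to them, using TF-IDF vectors and k-means; the concept ids and texts are invented.

```python
# Hedged sketch: cluster concepts by their attached text (TF-IDF + k-means).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

concepts = {                                  # invented ids and textual data
    "concept-1": "crispr cas9 gene editing nuclease",
    "concept-2": "guide rna target dna sequence",
    "concept-3": "lithium ion battery cathode",
    "concept-4": "electrolyte anode charge cycle",
}
X = TfidfVectorizer().fit_transform(concepts.values())
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for cid, label in zip(concepts, labels):
    print(label, cid)                         # concepts sharing a label co-cluster
```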

Geoffrey Westgate (World Intellectual Property Organization)

Geoffrey Westgate is Head of the Support Section, PCT Translation Division, at WIPO in Geneva, Switzerland. After obtaining a DPhil in 1999 from the University of Oxford, UK, where he also taught German language and literature, he worked initially as a translator and then a reviser in WIPO’s patent translation department. Since 2009 he has headed the Division’s Support Section, with responsibility for computer-assisted translation tools, translation project management, and terminology management, including WIPO’s online terminology portal, WIPO Pearl.

14:30 – 15:30 Workshop

The Localization Industry Word Count Standard: GMX-V

Word and character counts are the basis of virtually all metrics relating to costs in the L10N industry. An enduring problem with these metrics has been the lack of consistency between the various computer-assisted translation (CAT) tools and translation management systems (TMS). Notwithstanding these inconsistencies, there are also issues with the common word counts generated by word-processing systems such as Microsoft Word. Not only do different CAT and TMS systems generate differing word and character counts, but there is also a complete lack of transparency as to how these counts are arrived at: specifications are not published, and systems can produce quite widely differing metrics. To add clarity, consistency and transparency to the issue of word and character counts, the Global Information Management Metrics eXchange – Volume (GMX-V) standard was created. First as version 1.0 and now as version 2.0, GMX-V addresses the problem of counting words and characters in a localization task, and of exchanging such data electronically. This workshop goes through the details of how to identify and count words and characters using a standard canonical form, including for documents in Chinese, Japanese and Thai, as well as how to exchange such data between systems.
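As a taste of what the workshop covers, here is an illustrative Python sketch of script-aware counting: words for space-delimited scripts, characters for Chinese, Japanese and Thai. The real GMX-V specification defines a precise canonical form and detailed counting rules; this approximation is not the standard itself.

```python
# Hedged sketch of GMX-V-style counting: character counts for logographic and
# unsegmented scripts (Han, Kana, Thai), word counts for everything else.
import re

CJK_THAI = re.compile(r"[\u4e00-\u9fff\u3040-\u30ff\u0e00-\u0e7f]")
WORD = re.compile(r"[^\W_]+(?:['’-][^\W_]+)*")

def gmx_counts(text):
    logographic_chars = len(CJK_THAI.findall(text))
    words = len(WORD.findall(CJK_THAI.sub(" ", text)))   # count remaining words
    total_chars = sum(1 for c in text if not c.isspace())
    return {"words": words,
            "logographic_chars": logographic_chars,
            "total_chars": total_chars}

print(gmx_counts("GMX-V counts 机器翻译 and คอมพิวเตอร์ differently."))
```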

Moderated by Andrzej Zydroń (XTM)

Andrzej Zydroń MBCS CITP, CTO @ XTM International (see biography under his 11:15 presentation above).

15:00
eLUNa – The Web-based Family of Language Tools of the United Nations

This presentation introduces the eLUNa family of language tools developed by the United Nations: a web-based computer-assisted translation tool, an editorial interface and a search engine, all specially designed for UN language professionals. The presentation will also cover recent and future developments in eLUNa, as well as a short update on the projects to produce machine-readable documents and to share eLUNa with other organizations.

Natalia Bondonno (United Nations - New York)

Natalia Bondonno has been a United Nations staff member at the Department for General Assembly and Conference Management in New York since 2014. She is the Project Manager for machine-readable documents and for the UNTERM portal under the gText Project, which offers a suite of language applications, including eLUNa, an in-house developed CAT tool designed for UN language professionals.

Ms. Bondonno has a degree in Legal Translation from the University of Buenos Aires, a master’s in Translation from the University of Alicante and a master’s in International Law from Fundación Ortega y Gasset. Before joining the UN, she worked as a project manager and financial translator, and was a staff interpreter in the NY Civil Court for four years.

15:30  Health Break in Gallery and Marble Hall

During Health Break

15:40 – 15:55 Poster

MT and Post-editing from a Translator’s Perspective

There is no doubt that MT is nowadays one of the major trends in the translation industry. Indeed, more and more translation agencies offer MT and post-editing services to their clients, and professional translators are more and more likely to be offered post-editing tasks in their everyday work. In this context, and drawing from my own experience with MT and post-editing as a translator, I will discuss some common myths around MT and post-editing, will suggest some additional services that both translation agencies and freelance translators can offer in relation to MT, and will also put forward some reservations and ideas regarding MT evaluation within the translation industry. This paper will also make a plea to universities and academics involved in the teaching of MT courses and modules to also cater to the needs of practicing translators.

Dimitra Kalantzi (Translation Pozitron Ltd)

Dimitra Kalantzi is a professional English to Greek translator currently based in Athens. Over the past 13 years, Dimitra has worked for companies and translation agencies both in the UK and Greece, as well as for the European Parliament in Luxembourg (as a trainee translator). She has an MSc in Machine Translation from UMIST, UK and a PhD in Informatics (subtitling and linguistics) from the University of Manchester and is a member of the Institute of Translation and Interpreting (ITI) and the European Association for Machine Translation (EAMT).

16:00
The Human and the Machine: Perspectives to 2045 and Beyond

As Chair of the Institute of Translation and Interpreting (ITI) and Senior Lecturer in Translation Studies at the University of Portsmouth (UoP), I am constantly forced to consider the messages I give out to UoP students and ITI members aiming to work as the translators and interpreters of the future.

The machines will inevitably move on and it is likely that the bulk of human translators and interpreters will have to move on with them, so, ideally, the professional associations and training centres should prepare their members and students for mutually beneficial symbiotic relationships with the machines, helping them adapt to the possibilities of new modes and models of work.

But just what might these be?

Starting with Ray Kurzweil (who pencilled in ‘the singularity’ for 2045), I will take a trip through content from experts including Nicholas Carr, Spence Green and Dorothy Kenny to create some semblance of what the sector, skills profiles and working patterns might look like for the humans working with the machines, outlining some potential niches for humans in the sector of the future.

Sarah Griffin-Mason (ITI Chairperson & University of Portsmouth)

I am currently Chair of the Institute of Translation and Interpreting (ITI) and Senior Lecturer in Translation Studies at the University of Portsmouth. I am an experienced freelance translator, editor and educator, teaching translation at MA and undergraduate levels on a half-time contract while running a freelance translation and editing business; in all my roles I express a deep commitment to improving translator training.
I have taught Specialised Translation at the University of Portsmouth, Bristol University and London Metropolitan University since 2005 and, more recently, the Professional Aspects of Translation unit that has run at Portsmouth for the past two years.
I trained as an in-house translator with the InterPress Service in Montevideo in the 1990s before graduating with a distinction in the MA in Translation Studies at Portsmouth in 2005.
I worked for many years as a translator for UNICEF The Americas and Caribbean Regional Office, for the scientific publishers Elsevier on the bilingual medical journal Actas Dermosifiliográficas and for various other private clients on a variety of projects (see my website www.griffin-mason.com for more details).

Building a Custom Machine Translation Engine as Part of a Postgraduate University Course: A Case Study

In 2015, I was asked to design a postgraduate course on machine translation (MT) and post-editing. Following a preliminary theoretical part, the module concentrated on the building and practical use of custom machine translation (CMT) engines. This was a particularly ambitious proposition, since it was not certain that students with degrees in languages, translation and interpreting, without particular knowledge of computer science or computational linguistics, would succeed in assembling the necessary corpora and building a CMT engine. This paper looks at how the task was successfully achieved using KantanMT to build the CMT engines and Wordfast Anywhere to convert and align the training data. The course was clearly a success, since all students were able to train a working CMT engine and assess its output. The majority agreed that their raw CMT engine output was better than Google Translate’s for the kinds of text it was trained on, and better than the raw output (pre-translation) from a translation memory tool. There was some initial scepticism among the students regarding the real usefulness of MT, but the mood had clearly changed by the end of the course, with virtually all students agreeing that post-edited MT has a legitimate role to play.

Michael Farrell (Traduzioni Inglese)

Michael Farrell is an untenured lecturer in computer tools for translators and interpreters at the International University of Languages and Media (IULM), Milan, Italy, the developer of the terminology search tool IntelliWebSearch, a qualified member of the Italian Association of Translators and Interpreters (AITI), and member of the Mediterranean Editors and Translators association.

Besides this, he is also a freelance translator and transcreator. Over the years, he has acquired experience in the cultural tourism field and in transcreating advertising copy and press releases, chiefly for the promotion of technology products. Being a keen amateur cook, he also translates texts on Italian cuisine.

16:30

Round Table

The Translator and the Machine, Today and Tomorrow

A unique opportunity to share your experience and expectations on translators’ productivity and income, ergonomics and usability of translation tools, professional development and continuing education.

The TC39 participants who take part in this session will set the agenda.

Workshop

Does your Tool Support XLIFF 2?
Moderated by David Filip (Trinity College Dublin)

David Filip is Chair (Convener) of OASIS XLIFF OMOS TC; Secretary, Editor and Liaison Officer of OASIS XLIFF TC; a former Co-Chair and Editor for the W3C ITS 2.0 Recommendation; Advisory Editorial Board member for the Multilingual magazine; Co-Chair of the Standards Interest Group at JIAMCATT. His specialties include open standards and process metadata, workflow and meta-workflow automation. David works as a Moravia Fellow at the ADAPT Research Centre, Trinity College Dublin, Ireland. Before 2011, he oversaw key research and change projects for Moravia’s worldwide operations. David held research scholarships at universities in Vienna, Hamburg and Geneva, and graduated in 2004 from Brno University with a PhD in Analytic Philosophy. David also holds master’s degrees in Philosophy, Art History, Theory of Art and German Philology.
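A quick self-test for the workshop question: the snippet below embeds a minimal XLIFF 2.0 Core document (namespace and structure per the OASIS specification) and round-trips it with Python’s standard library; a tool that cannot import something this simple does not support XLIFF 2.

```python
# A minimal XLIFF 2.0 Core document, parsed with the standard library as a
# sanity check; real tools should import and export this without loss.
import xml.etree.ElementTree as ET

XLIFF2 = """<?xml version="1.0" encoding="UTF-8"?>
<xliff xmlns="urn:oasis:names:tc:xliff:document:2.0"
       version="2.0" srcLang="en" trgLang="fr">
  <file id="f1">
    <unit id="u1">
      <segment>
        <source>Hello world</source>
        <target>Bonjour le monde</target>
      </segment>
    </unit>
  </file>
</xliff>"""

ns = {"x": "urn:oasis:names:tc:xliff:document:2.0"}
root = ET.fromstring(XLIFF2.encode("utf-8"))
for seg in root.iterfind(".//x:segment", ns):
    print(seg.find("x:source", ns).text, "->", seg.find("x:target", ns).text)
```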

17:30  End of day 1