Workshops

(in alphabetical order by moderator)

Jerzy Czopik


Jerzy Czopik was born and grew up in Cracow, where he started to study mechanical engineering. In 1986 he moved with his wife to Dortmund, Germany, where he completed his mechanical engineering studies and began his career as a translator and interpreter in 1990.

Jerzy is an approved trainer for SDL Trados Studio and MultiTerm. Together with his wife he is certified by LICS according to EN 15038, and he also acts as a LICS auditor for this standard.

In 2011 he published a manual on SDL Trados Studio in Polish. He is also very active on various lists and forums, helping users with SDL Trados Studio problems.

The session is directed at project managers of smaller translation companies and at freelance translators who want to improve their translations.

Intermediate and advanced users are expected.
The first part of the session is a PowerPoint presentation on quality in translation. First, quality will be defined and the means to achieve it outlined; then possible quality problems and their causes will be shown.
The second part will show how to use SDL Trados Studio, memoQ and Xbench to control the output quality of translated material, focusing on how to achieve the best results with the least effort.


Quality does not start when the translation is finished. To deliver a high-quality product, a well-designed process is necessary, one which ideally starts even before the translation is assigned to a translator.
The session will start by defining quality and looking at the measures needed to achieve it. That gives us a good starting point for talking about checking quality.

The quality of a translation cannot be achieved simply by using CAT or QA tools. These tools can only provide some help; they cannot replace the human. Nevertheless, good quality can be improved if the tools are used properly – but only then, as improper use will cause a lot of misunderstandings and problems.

We shall therefore talk about quality checking focused on the target language. Tools like SDL Trados Studio, memoQ and Xbench allow you to configure their QA-checking modules, but in quite different ways. Here not only knowledge of the tool but also some understanding of the target language is necessary. Ideally, QA checking should be done by people who understand both the source and the target language. Unfortunately, this step is very often performed by project managers, who typically cannot have command of as many languages as occur in the projects they manage.
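
By way of illustration, here is a minimal sketch (in Python, not the implementation of any of these tools) of the kind of target-side check such QA modules perform. The rule set is hypothetical: the decimal-separator rule assumes a German target text, which is exactly the sort of language-dependent setting that requires target-language knowledge to configure.

    import re

    def qa_check_segment(source, target):
        """Toy QA check for one segment pair; the rules are hypothetical examples."""
        issues = []

        # Numbers must survive translation (tolerating , vs . separator swaps).
        num = r"\d+(?:[.,]\d+)?"
        src_nums = {n.replace(".", ",") for n in re.findall(num, source)}
        tgt_nums = {n.replace(".", ",") for n in re.findall(num, target)}
        if src_nums != tgt_nums:
            issues.append("number mismatch: %s" % (src_nums ^ tgt_nums))

        # A purely formal check needs no language knowledge ...
        if "  " in target:
            issues.append("double space in target")

        # ... but this one does: a decimal point is an error in German,
        # where the comma is the decimal separator, yet correct in English.
        if re.search(r"\d\.\d", target):
            issues.append("decimal point instead of comma in target")

        return issues

    print(qa_check_segment("The pump delivers 3.5 l/min.",
                           "Die Pumpe fördert 3,5 l/min."))   # -> []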

During the session I will show why understanding the target language is also necessary when doing QA checking.


Joanna Drugan



Dr Joanna Drugan is Senior Lecturer in Applied Translation Studies at the University of East Anglia, UK.
Her main research interests include translation quality, translation ethics and translation technologies.
Her most recent book is Quality in Professional Translation (Bloomsbury, 2013).

She is currently researching real-world ethical challenges when professional translators and interpreters are not available, particularly in healthcare and social work, and ways in which training and technology might support professionals and service users faced with such challenges.

Jo holds an MA (Hons) and a PhD in French from the University of Glasgow, Scotland. She previously worked at Reading University and Leeds University, where she was a founder member of the Centre for Translation Studies and ran the MA in Applied Translation Studies for over a decade. She was awarded a National Teaching Fellowship and became a member of the Higher Education Academy in 2008.

She has served as a member of the Peer Review Council for the Arts and Humanities Research Council since 2012 and was selected as a founding member of the Publication Integrity and Ethics Council in 2013. Since joining UEA in 2012, Jo has led specialist Masters modules in translation technologies, translation as a profession, and research methods, and an undergraduate module on translation and globalisation.

She is Director of Graduate Studies for the School.


Top-down or bottom-up: what do industry approaches to translation quality mean for effective integration of standards and tools?

The diverse approaches to translation quality in the industry can be grouped in two broad camps: top-down and bottom-up. A recent study of the language services industry (‘Quality in Professional Translation’, Bloomsbury, 2013) identified different models within these two camps (e.g. ‘minimalist’ or ‘experience-dependent’) and drew out distinctive features for each. These different approaches have significant implications for, first, the integration of industry standards on quality, and, second, the efficient harnessing of technology throughout the translation workflow.
In this workshop I will explain the range of industry approaches to translation quality, then ask how these map onto the successful integration of standards and onto the features of the leading tools designed to support or enhance quality.
Are standards and technologies inevitably experienced as an imposition by translators and others involved in the translation process?
Do any industry approaches suggest painless ways these developments can be channelled to improve quality, or to maintain it while meeting client demands for tighter deadlines?
What lessons can we learn from the enthusiasts?


The diverse approaches to translation quality in the industry can be grouped in two broad camps: top-down and bottom-up. The author has recently published a decade-long study of the language services industry (Quality in Professional Translation, Bloomsbury, 2013). Research for the study covered translation providers from individual freelance translators working at home to large-scale institutions, including the European Union Directorate-General for Translation, commercial translation companies and divisions, and not-for-profit translation groups.

Within the two broad ‘top-down’ and ‘bottom-up’ camps, a range of further sub-models was identified and catalogued (e.g. ‘minimalist’ or ‘experience-dependent’). The shared distinctive features of each sub-group were described, with a particular focus on their use of technologies.

These different approaches have significant implications for, first, the integration of industry standards on quality, and, second, the efficient harnessing of technology throughout the translation workflow.

This contribution explains the range of industry approaches to translation quality, then asks how these map onto the successful integration of standards and onto the features of the leading tools designed to support or enhance quality.
Are standards and technologies inevitably experienced as an imposition by translators and others involved in the translation process? Significantly, no straightforward link was found between a ‘top-down’ or ‘bottom-up’ approach to assessing or improving translation quality and effective use of tools or standards. Instead, positive practice was identified across a range of approaches.

The discussion outlines some painless ways these developments are being channelled to improve quality, or more frequently, to maintain it while meeting tighter deadlines. Some models existed beyond, or were partially integrated in, ‘professional’ translation (e.g. pro bono translators, and volunteer Open Source localizers).

What lessons can we learn from enthusiasts in such communities, who sometimes adopt or create approaches voluntarily?


Michael Farrell



Michael Farrell is primarily a freelance technical translator, but is also an untenured lecturer in computer tools for translation and interpreting at the IULM University (Milan, Italy).

He is an Atril Certified Training Partner and the author of “A Tinkerer’s Guide to Structured Query Language in Déjà Vu X”. He is also the developer of the freeware terminology search tool IntelliWebSearch and a qualified member of the Italian Association of Translators and Interpreters (AITI).


Solving Terminology Problems More Quickly with “IntelliWebSearch (Almost) Unlimited”

In 2005, the speaker received several descriptions of university courses to translate, which boiled down to a list of topics and laws of mathematics and physics: not many complex sentences, but a great deal of terminology which needed translating with the utmost care. He found himself repeatedly copying terms to his PC clipboard, opening his browser, opening the most appropriate websites, pasting terms into search boxes, setting search parameters, clicking search buttons, analysing results, copying the best solutions back to the clipboard, returning to his translation environment and pasting the translated terms into the text. He quickly realized he needed to find a way to semi-automate the process and started looking for a tool, but found nothing similar to what he needed.

He therefore set about writing a macro, which gradually grew until it became a fully fledged software tool.
A new version is currently being developed under the code name “IntelliWebSearch (Almost) Unlimited” (pre-alpha at the time of writing).

During the workshop the speaker will reveal some of its features for the first time in public.
The workshop is aimed at translators, interpreters, lexicographers and terminologists in all fields.


Michael Farrell received several descriptions of university courses to translate from Italian into English in early 2005. The curricula boiled down to a list of topics and laws of mathematics and physics: not many complex sentences, but a great deal of terminology which needed translating and double-checking with the utmost care and attention. To do this, he found himself repeatedly copying terms to his PC clipboard, opening his browser, opening the most appropriate on-line resources, pasting terms into search boxes, setting search parameters, clicking search buttons, analysing results, copying the best solutions back to the clipboard, returning to the translation environment and pasting the terms found into the text. He quickly realized that he needed to find a way to semi-automate the terminology search process in order to complete the translation in a reasonable time and for his own sanity. He immediately started looking around for a tool, but surprisingly there seemed to be nothing similar to what he needed on the market. Having already created some simple macros with a free scripting language called AutoHotkey, he set about writing something that would do the trick.
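
The principle behind that first macro is easy to sketch. The snippet below is not IntelliWebSearch itself (which is built on AutoHotkey); it is a hypothetical Python illustration of the same idea, with invented URL patterns standing in for the user’s configured resources: substitute the term into each pre-configured search URL and open them all at once.

    import webbrowser
    from urllib.parse import quote_plus

    # Hypothetical resource list; "{term}" marks where the search string goes.
    # In the real tool, users configure their own dictionaries and term bases.
    RESOURCES = [
        "https://www.example-dictionary.com/search?q={term}",
        "https://www.example-termbase.org/lookup/{term}",
        "https://www.google.com/search?q=%22{term}%22",
    ]

    def search_term(term):
        """Replace the manual copy/open/paste/click cycle with one call."""
        for pattern in RESOURCES:
            webbrowser.open(pattern.format(term=quote_plus(term)))

    search_term("momento angolare")   # e.g. an Italian physics term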

The first simple macro he knocked out gradually grew and developed until it became a fully fledged software tool: IntelliWebSearch. After speaking to several colleagues about it, he was persuaded to share his work and put together a small group of volunteer beta-testers. After a few weeks of testing on various Windows systems, he released the tool as freeware towards the end of 2005.

At the beginning of his workshop, Michael Farrell will explain what prompted him to create the tool and how he went about it.  He will then go on to describe its use and its limitations, and show how it can save translators and terminologists a lot of time with a live demonstration, connectivity permitting.

The workshop will conclude with a presentation revealing for the first time in public some of the features of a new version which is currently being developed under the code name “IntelliWebSearch (Almost) Unlimited” (pre-alpha at the time of writing).
The workshop is aimed at professional translators, interpreters and terminologists in all fields, especially those interested in increasing efficiency through the use of technology without lowering quality standards.


Attila Görög



Attila Görög has been involved in various national and international projects on language technology over the past 10 years. He has a solid background in quality evaluation, post-editing and terminology management.

Attila is interested in globalization issues and projects involving CAT tools.

His webinars and workshops discuss hot topics in the translation industry with the aim of making participants future-proof.

As a product manager at TAUS, he is responsible for the TAUS Evaluation platform, also referred to as the Dynamic Quality Framework or DQF.


Quality Evaluation today: the Dynamic Quality Framework

We will give an overview of existing best practices in translation quality evaluation (QE). A dynamic approach will be introduced that is standardized but flexible enough to bring common sense to translation QE.

We call this approach the Dynamic Quality Framework (DQF).

Standard reporting in DQF opens the way to benchmarking translation quality on an industry scale, bringing more business intelligence to the translation industry and helping the industry grow and innovate faster.

This workshop is relevant for anyone interested in translation quality. Information on best practices in QE will be provided as well as a practical introduction to DQF. We will discuss common problems related to the topic and compare evaluation methods and metrics.


Quality is when the buyer or customer is satisfied. Yet quality measurement in the translation industry is not always linked to customer satisfaction; instead it is managed by quality gatekeepers on the supply and demand sides, who apply specific evaluation models, the majority of which are based on counting errors, applying penalties and maintaining thresholds, with little, if any, interaction with customers or ‘real users’.
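
To make the gatekeeper model concrete, here is a minimal Python sketch of such an error-count metric, with weighted penalties and a pass/fail threshold. The categories, weights and threshold are invented for the example and are not taken from any published metric.

    # Hypothetical severity weights, loosely modelled on traditional
    # error-count metrics: penalty points accumulate per error found in review.
    PENALTIES = {"minor": 1, "major": 5, "critical": 10}

    def quality_score(errors, word_count, threshold=15.0):
        """Weighted error points per 1,000 words; pass if at or below threshold."""
        points = sum(PENALTIES[severity] * n for severity, n in errors.items())
        score = points / word_count * 1000
        return score, score <= threshold

    # A 2,500-word job with 12 minor, 3 major and 1 critical error:
    # 12*1 + 3*5 + 1*10 = 37 points -> 37/2500*1000 = 14.8 -> pass.
    score, passed = quality_score({"minor": 12, "major": 3, "critical": 1}, 2500)
    print("%.1f points per 1,000 words -> %s" % (score, "pass" if passed else "fail"))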

This makes assessing translation quality the single biggest challenge in the translation industry today.
The increased usage of translation technology and the emergence of new dynamic content complicate the translation quality challenge even further. Buyers and providers of translation services need to be able to adjust quality up or down: they need to deliver a dynamic service, a translation quality that matches the purpose of the communication.

In this workshop, we will give an overview of existing best practices in translation QE. A dynamic approach will be introduced that is standardized but flexible enough to bring common sense to translation quality evaluation. We call this approach the Dynamic Quality Framework (DQF).

Standard reporting in DQF opens the way to benchmarking translation quality on an industry scale, bringing more business intelligence to the translation industry and helping the industry grow and innovate faster.
1. We summarize best practices for reducing quality problems earlier in the content production cycle.
2. We review some of the main methods for quality evaluation in domains related to translation, i.e. machine translation, translator training, community translation, and (monolingual) technical communication.
3. We demonstrate that the concepts of utility, time and sentiment play an important role in quality evaluation in these areas and we propose eight QE models.
4. We introduce DQF, which takes into account the communication channel and the content type of translations and offers dynamic ways of doing QE. It is informed by the results of a content profiling exercise performed by some of the companies collaborating in the DQF project, which showed that it is possible to map content profiles to the evaluation parameters of utility, time and sentiment (a toy illustration of such a mapping follows this list).
5. We will close with a short description of the DQF tools.
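
As promised in point 4, here is a toy Python sketch of what mapping content profiles to the parameters of utility, time and sentiment could look like. The profiles, weightings and model names below are invented for illustration; they are not the actual DQF mapping.

    # Invented content profiles, each weighting utility, time and sentiment.
    PROFILES = {
        "user documentation": {"utility": 0.6, "time": 0.3, "sentiment": 0.1},
        "marketing copy":     {"utility": 0.2, "time": 0.2, "sentiment": 0.6},
        "support content":    {"utility": 0.3, "time": 0.6, "sentiment": 0.1},
    }

    # Invented mapping from the dominant parameter to an evaluation method.
    MODELS = {
        "utility": "adequacy/fluency review",
        "time": "productivity and turnaround measurement",
        "sentiment": "end-user satisfaction survey",
    }

    def pick_evaluation_model(content_type):
        """Select an evaluation method from the dominant profile parameter."""
        profile = PROFILES[content_type]
        dominant = max(profile, key=profile.get)
        return MODELS[dominant]

    print(pick_evaluation_model("marketing copy"))   # -> end-user satisfaction survey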

This workshop is relevant for anyone interested in translation quality. Information on best practices in QE will be provided as well as an introduction to the Dynamic Quality Framework. We will elaborate on common problems related to the topic and compare evaluation methods and metrics.


Koen Kerremans


Koen Kerremans obtained his Master’s degree in Germanic Philology (Dutch-English) at Universiteit Antwerpen in 2001, his Master’s degree in Language Sciences – with a major in computational linguistics – at Universiteit Gent in 2002 and his PhD degree in Applied Linguistics at Vrije Universiteit Brussel in 2014 (Title of his dissertation: ‘Terminological variation in multilingual Europe. The case of English environmental terminology translated into Dutch and French’).

His research interests pertain to applied linguistics, language technologies, ontologies, terminology (variation) and translation studies.

He currently holds a position as a post-doctoral researcher and teaching assistant at the Department of Applied Linguistics (Faculty of Arts and Philosophy) of Vrije Universiteit Brussel (VUB), where he teaches applied linguistics, terminology and several Dutch language courses.

He is a member of VUB’s research group ‘Centrum voor Vaktaal en Communicatie’ (Centre for Special Language Studies and Communication).


Representing intra- and interlingual terminological variation in a new type of translation resource: a prototype proposal

In ontologically-underpinned terminological knowledge bases or TKBs, terminological data tend to be represented in networks comprised of conceptual and semantic relations. As opposed to traditional ways of representing terminological data (e.g. on the basis of alphabetically sorted lists, tables or matrices), such networks allow for a flexible and dynamic visualisation of data that may be connected to one another in several ways.

The aim of this contribution is to reflect on how visualisations of terms, intralingual variants and their translations in networks can be improved by taking into account the contextual constraints of the texts in which they appear. To this end, a novel type of translation resource has been developed, resulting from a semi-automatic method for identifying intralingual variants and their translations in texts.

A prototype visualisation of this resource will be presented in which terms, variants and their translations appear as a contextually-conditioned network of ‘language options’. The proposed model derives from the Hallidayan premise that each language option or choice acquires its meaning against the background of other choices which could have been made.

The choices are perceived as functional: i.e. they can be motivated against the backdrop of a complex set of contextual conditions.


In this study, terminological variation pertains to the different ways in which specialised knowledge is expressed in written discourse by means of terminological designations. Choices regarding the use of term variants in source texts (i.e. intralingual variation) as well as the different translations of these variants in target texts (i.e. interlingual variation) are determined by a complex interplay of contextual factors of several kinds. For translators, it is therefore important to know the different language options (i.e. variants) that are available when translating terms and to know in which situational contexts certain options are more likely to be used.

To this end, translators often consult bi- or multilingual translation resources (e.g. terminological databases) to find solutions to certain translation problems. Different possibilities are offered in terminological databases to represent and visualise intra- and interlingual variants. In conventional terminology bases, terms in several languages usually appear on concept-oriented term records. This particular way of structuring and visualising terminological data has its roots in prescriptive terminology in which terms are merely viewed as ‘labels’ assigned to clearly delineated concepts (Picht and Draskau 1985). In ontologically-underpinned terminological knowledge bases or TKBs, terminological data tend to be represented in networks comprised of conceptual and semantic relations (Kerremans et al. 2008; Faber 2011; Durán Muñoz 2012; Peruzzo 2013). As opposed to traditional ways of representing terminological data (e.g. on the basis of alphabetically sorted lists, tables or matrices), such networks allow for a flexible and dynamic visualisation of data that may be connected to one another in several ways.

The aim of this contribution is to reflect on how visualisations of terms, variants and their translations in networks can be improved by taking into account the contextual constraints of the texts in which they appear. To this end, a novel type of translation resource has been developed, resulting from a semi-automatic method for identifying intralingual variants and their translations in texts.

A prototype visualisation of this resource will be presented in which terms, variants and their translations appear as a contextually-conditioned network of ‘language options’. The proposed model derives from the Hallidayan premise that each language option or choice acquires its meaning against the background of other choices which could have been made. The choices are perceived as functional: i.e. they can be motivated against the backdrop of a complex set of contextual conditions (Eggins 2004). Changing these contextual conditions causes direct changes in the network of terminological options that are shown to the user.
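
By way of illustration only, the following Python sketch shows one possible data model behind such a network: variant/translation pairs carry labels for the contextual conditions under which they were observed, and changing the requested conditions changes which options are displayed. The data and condition labels are invented; the prototype’s actual model may differ.

    # Invented sample data: (EN variant, FR translation, contextual conditions).
    NETWORK = [
        ("greenhouse gas emissions", "émissions de gaz à effet de serre",
         {"register": "legal", "genre": "directive"}),
        ("GHG emissions", "émissions de GES",
         {"register": "technical", "genre": "report"}),
        ("carbon emissions", "émissions de carbone",
         {"register": "journalistic", "genre": "press release"}),
    ]

    def language_options(**conditions):
        """Return the variant/translation pairs compatible with the requested
        contextual conditions; the fewer the conditions, the more the options."""
        return [(src, tgt) for src, tgt, ctx in NETWORK
                if all(ctx.get(k) == v for k, v in conditions.items())]

    # Changing the contextual conditions changes the visible network:
    print(language_options(register="legal"))
    print(language_options())   # no conditions: the full network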

List of references

Durán Muñoz, Isabel. 2012. La ontoterminografía aplicada a la traducción. Vol. 80. Studien Zur Romanischen Sprachwissenschaft Und Interkulturellen Kommunikation. Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien: Peter Lang.
Eggins, Suzanne. 2004. Introduction to Systemic Functional Linguistics: 2nd Edition. London/New York: Continuum International Publishing Group.
Faber, Pamela. 2011. The dynamics of specialized knowledge representation: Simulational reconstruction or the perception–action interface. Terminology 17: 9–29.
Kerremans, Koen, Rita Temmerman, and Peter De Baer. 2008. Construing domain knowledge via terminological understanding. Linguistica Antverpiensia 7: 177–191.
Peruzzo, Katia. 2013. Terminological equivalence and variation in the EU multi-level jurisdiction: a case study on victims of crime. PhD Thesis, Trieste: Università degli Studi di Trieste.
Picht, Heribert, and Jennifer Draskau. 1985. Terminology: an introduction. Surrey: University of Surrey, Department of Linguistic and International Studies.


Jessica Xiangyu Liu


Jessica Xiangyu Liu is a research postgraduate at the Department of Translation, The Chinese University of Hong Kong, Hong Kong.
She is in her first year of the MPhil programme in Translation.

She has a strong interest in the teaching of computer-assisted translation systems, and in the hybrid use of machine translation and computer-assisted translation.

Before beginning her current studies, she worked as a research assistant at the Centre for Translation Technology, CUHK (2010–2013), where she was engaged in the research and training of computer-assisted translation software.


Teaching the use of computer-aided translation systems in a progressive manner

The teaching of computer-aided translation has become commonplace in academic institutions in recent years, and more research has been done and more works published in this area. While much has been written on the theoretical and conceptual aspects of computer-aided translation and on course content, little has been done on its practical side.

This workshop will present the classroom practice modules of Introduction to Computer-aided Translation, an MA course at the Department of Translation of The Chinese University of Hong Kong. The author will discuss how to teach the use of computer-aided translation systems in a progressive manner, through demonstrations and classroom practice, moving from basic functions to advanced operations.

This workshop will also present some pedagogical reflections on the teaching of computer-aided translation systems.

It is hoped that it will lead to a rethinking of the way computer-aided translation systems should be taught.
What it intends to propose is to bring the learning of translation technology closer to the real world through systematic training, thus responding to the changing professional requirements that translators face in their workplace.

Workshops organized by Sponsors

Kilgray – Silver Sponsor: “memoQ 2014 presentation”, Lone Beheshty
MateCat – Gold Sponsor: “Free. A new business model for CAT tools.”, Alessandro Cattelan, Translated.net
SDL – Gold Sponsor: “Extending your use of SDL Trados Studio 2014 – hints and tips to improve your productivity”, Lydia Simplicio, SDL Business Consultant