Topic 2: How to Pump Up Your MT Quality (4)
TRANSCRIPT
Topics
Background
The need for Language Quality Review
Introducing Industry Standards
MQM – QTLaunchPad Project
DFKI, CNGL, University of Sheffield, Athena (with inputs from GALA, FIT)
Automating the Process of Quality Evaluation
KantanLQR
A Cloud-based Platform that engages with Professional Translators in the development of KantanMT engines
Background
One of the biggest challenges in deploying Custom Machine Translation is measuring translation quality
How can you develop a formalised mechanism to determine translation quality?
More importantly, how can you formalise the measurement of translation quality so that it delivers usable metrics and drives a deeper understanding of how your engine will perform in production?
Using Industry Standards
Multidimensional Quality Metrics (MQM): European Commission-funded
Partners: DFKI, CNGL (DCU), University of Sheffield, Athena (with inputs from GALA, FIT)
Focus: customised quality metrics for human and machine translation quality evaluation (an illustrative scoring sketch follows below)
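MQM scores translation quality by counting errors against a typology and weighting them by severity. The snippet below is a minimal sketch of that idea in Python; the severity weights (minor=1, major=5, critical=10) follow one common convention and are illustrative, not an official MQM specification.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(error_counts, word_count):
    """Return a 0-100 quality score from per-severity error counts,
    e.g. error_counts = {"minor": 4, "major": 1, "critical": 0}."""
    penalty = sum(SEVERITY_WEIGHTS[sev] * n for sev, n in error_counts.items())
    # Normalise the penalty by the size of the evaluated sample.
    return max(0.0, 1.0 - penalty / word_count) * 100

# Example: 4 minor errors and 1 major error in a 500-word sample -> 98.2
print(mqm_score({"minor": 4, "major": 1, "critical": 0}, 500))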
KantanLQR
Language Quality Review Platform: easy-to-manage workflow; notification, tracking, scoring
Customised KPIs: multiple error typologies; compulsory and optional KPIs
Customised Projects: define error typology, distribution lists, project duration (see the project-setup sketch below)
Data visualisation: real-time, multiple views, downloadable
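As a rough illustration of the project-setup options listed above (error typology, distribution lists, project duration, compulsory and optional KPIs), here is one way to model them in Python. The field names are assumptions made for this sketch, not KantanLQR's actual configuration schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReviewProject:
    # All field names are placeholders, not KantanLQR's real schema.
    name: str
    error_typology: list          # error categories reviewers can assign
    distribution_list: list       # reviewer e-mail addresses
    start: date                   # project duration: start ...
    end: date                     # ... and end dates
    compulsory_kpis: list = field(default_factory=lambda: ["Adequacy"])
    optional_kpis: list = field(default_factory=list)

project = ReviewProject(
    name="EN-DE engine review",
    error_typology=["Accuracy", "Fluency", "Terminology", "Style"],
    distribution_list=["reviewer1@example.com", "reviewer2@example.com"],
    start=date(2014, 7, 1),
    end=date(2014, 7, 14),
)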
Timeline
Kantan BuildAnalytics: automated scores (BLEU, TER, F-Measure)
LQR results: review results, determine threshold scores
Achieved scores: if scores meet client expectations, the engine is production ready! (see the scoring sketch below)
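Here is a minimal sketch of the automated-scoring and threshold step. It assumes the open-source sacrebleu library for BLEU and TER and a simple word-overlap F1 as a stand-in for Kantan's F-Measure; the library choice and the threshold values are illustrative assumptions, not part of KantanMT.

# sacrebleu is used here only for illustration; KantanMT computes these
# scores internally. Threshold values are hypothetical client expectations.
import sacrebleu

hypotheses = ["the cat sat on the mat"]       # engine output
references = [["the cat sat on a mat"]]       # one human reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)   # higher is better
ter = sacrebleu.corpus_ter(hypotheses, references)     # edit rate, lower is better

def token_f_measure(hyp, ref):
    """Simple set-based word-overlap F1 (illustrative stand-in for F-Measure)."""
    hyp_set, ref_set = set(hyp.split()), set(ref.split())
    overlap = len(hyp_set & ref_set)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(hyp_set), overlap / len(ref_set)
    return 2 * precision * recall / (precision + recall)

f1 = token_f_measure(hypotheses[0], references[0][0])

# Hypothetical production-readiness thresholds.
ready = bleu.score >= 50.0 and ter.score <= 40.0 and f1 >= 0.7
print(f"BLEU={bleu.score:.1f}  TER={ter.score:.1f}  F={f1:.2f}  ready={ready}")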
Deploy: Start Q3 2014
KantanLQR: engage with professional translators to evaluate language quality
KantanMT.com: data collection, cleansing, manufacturing
KantanMT.com: retrain engine with results from the KantanLQR cycle (a feedback-loop sketch follows below)
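To make the retraining cycle concrete, the sketch below shows the feedback loop as plain Python; every name here (train, review, the 90-point threshold) is a placeholder, since the source does not show KantanMT's actual API.

def retrain_until_ready(training_data, train, review, min_score=90.0, max_rounds=5):
    """Build an engine, run an LQR round, and fold reviewer corrections
    back into the training data until scores meet the client threshold.

    train(data) -> engine; review(engine) -> (score, corrected_pairs).
    """
    engine = train(training_data)
    for _ in range(max_rounds):
        score, corrected_pairs = review(engine)
        if score >= min_score:                   # meets client expectations
            return engine                        # production ready: deploy
        training_data.extend(corrected_pairs)    # reviewer fixes become data
        engine = train(training_data)            # retrain with the new data
    return engine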