Itzulpen automatiko gainbegiratu gabea (Unsupervised machine translation)

  1. ARTETXE ZURUTUZA, MIKEL
Supervised by:
  1. Gorka Labaka Intxauspe Supervisor
  2. Eneko Agirre Bengoa Supervisor

Defending university: Universidad del País Vasco - Euskal Herriko Unibertsitatea

Date of defense: 29 July 2020

Jury:
  1. Kepa Sarasola Gabiola President
  2. Pablo Gamallo Otero Secretary
  3. Cristina España Bonet Rapporteur
Department:
  1. Lenguajes y Sistemas Informáticos

Type: Thesis

Teseo: 152737 DIALNET ADDI

Abstract

Modern machine translation relies on strong supervision in the form of parallel corpora. Such a requirement greatly departs from the way in which humans acquire language, and poses a major practical problem for low-resource language pairs. In this thesis, we develop a new paradigm that removes the dependency on parallel data altogether, relying on nothing but monolingual corpora to train unsupervised machine translation systems. For that purpose, our approach first aligns separately trained word representations in different languages based on their structural similarity, and uses them to initialize either a neural or a statistical machine translation system, which is further trained through iterative back-translation. While previous attempts at learning machine translation systems from monolingual corpora had strong limitations, our work, along with other contemporaneous developments, is the first to report positive results in standard, large-scale settings, establishing the foundations of unsupervised machine translation and opening exciting opportunities for future research.
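The iterative back-translation step described above can be illustrated with a minimal sketch. Everything here is a toy assumption for illustration: the word-substitution "models" and the `train`/`translate`/`iterative_back_translation` names stand in for the real neural or statistical systems the thesis uses, and the seed dictionaries stand in for the initialization obtained by aligning monolingual word embeddings.

```python
def train(pairs):
    """Toy 'training': build a word-level substitution table from
    (source sentence, target sentence) pairs, assuming word-by-word
    alignment. A real system would train an NMT or SMT model here."""
    table = {}
    for src, tgt in pairs:
        for s, t in zip(src.split(), tgt.split()):
            table[s] = t
    return table

def translate(model, sentence):
    """Word-by-word substitution; unknown words are copied through."""
    return " ".join(model.get(w, w) for w in sentence.split())

def iterative_back_translation(mono_src, mono_tgt,
                               init_src2tgt, init_tgt2src, rounds=3):
    """Refine two translation directions using only monolingual text.

    Each round, every monolingual target sentence is back-translated
    into a synthetic source sentence, yielding (synthetic source,
    real target) pairs to retrain the source-to-target model, and
    symmetrically for the other direction."""
    src2tgt, tgt2src = dict(init_src2tgt), dict(init_tgt2src)
    for _ in range(rounds):
        synth_for_s2t = [(translate(tgt2src, t), t) for t in mono_tgt]
        synth_for_t2s = [(translate(src2tgt, s), s) for s in mono_src]
        src2tgt = train(synth_for_s2t)
        tgt2src = train(synth_for_t2s)
    return src2tgt, tgt2src
```

Seeded with a small dictionary (playing the role of the embedding-alignment step) and two tiny monolingual corpora, each round produces synthetic parallel data in both directions and retrains both models on it; in the full systems this loop progressively improves real MT models rather than a substitution table.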