
# DataCollection

Collecting data for machine translation training from CommonCrawl is a two-phase process illustrated in the following diagram:

CommonCrawl process diagram

## Installation

Hardware requirements and installation instructions can be found here.

## Phase 1: Language annotation, building a meta-data file and monolingual data extraction

The first phase detects the language of each web page contained in the crawl and gathers other meta-data. A meta-data file is built from this analysis.

The metadata documentation describes phase 1 step-by-step.
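The project's own annotation tools are described in that documentation. As a loose illustration only, the sketch below shows the general idea of per-page language annotation; it uses the langdetect package as a stand-in for whatever detector the real pipeline uses, and the record fields (url, offset, language) are illustrative assumptions rather than the project's metadata schema.

```python
# Illustrative sketch of per-page language annotation (not the project's code).
# Each crawled page is tagged with a detected language and written out as one
# JSON meta-data record per line.
import json

from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException


def annotate(pages):
    """pages: iterable of (url, offset, plain_text) tuples taken from a crawl."""
    for url, offset, text in pages:
        try:
            language = detect(text)  # e.g. "en", "de"
        except LangDetectException:
            language = "unknown"     # detection can fail on empty or very short text
        yield {"url": url, "offset": offset, "language": language}


if __name__ == "__main__":
    sample = [("http://example.com/en/about", 0, "This is an English page about the project.")]
    for record in annotate(sample):
        print(json.dumps(record))
```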

With the data from this phase, monolingual data for language model training can be extracted. The data for most of the CommonCrawl crawls and many languages can be found on:
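Independent of where the extracted data is hosted, a minimal sketch of the extraction step itself could look as follows. It assumes the phase 1 meta-data is stored as one JSON record per line with a language field, and the fetch_text helper, which would load a page body from the crawl, is a hypothetical placeholder rather than part of the project's API.

```python
# Illustrative sketch of monolingual extraction from phase 1 meta-data
# (not the project's actual tooling).
import json


def extract_monolingual(metadata_path, target_language, fetch_text):
    """Yield the text of every page annotated with target_language.

    fetch_text(record) is a hypothetical helper that loads the page body,
    e.g. by seeking to record["offset"] in the corresponding crawl file.
    """
    with open(metadata_path, encoding="utf-8") as handle:
        for line in handle:
            record = json.loads(line)
            if record.get("language") == target_language:
                yield fetch_text(record)
```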

## Phase 2: Extracting parallel data and optional cleaning

In the second phase, the meta-data collected in phase 1 is used to extract parallel data from CommonCrawl based on URL pattern matching. Phase 2 is documented step-by-step in the baseline documentation.
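As a rough sketch of the URL pattern matching idea (not the project's actual matcher): URLs that become identical once a language marker such as /en/ or /de/ is stripped can be treated as candidate parallel pages. The record fields below are the same illustrative assumptions as above.

```python
# Illustrative sketch of URL pattern matching for candidate parallel pages.
import re
from collections import defaultdict

# Language markers that commonly appear as a path component, e.g. /en/ or /de/.
LANG_MARKER = re.compile(r"/(en|de|fr|es|it|pt|nl|ru)(?=/|$)")


def match_candidates(records, src="en", trg="de"):
    """records: iterable of meta-data dicts with at least "url" and "language"."""
    by_key = defaultdict(dict)
    for record in records:
        key = LANG_MARKER.sub("", record["url"])  # URL with the language marker removed
        by_key[key][record["language"]] = record["url"]
    for urls in by_key.values():
        if src in urls and trg in urls:
            yield urls[src], urls[trg]


pairs = match_candidates([
    {"url": "http://example.com/en/contact", "language": "en"},
    {"url": "http://example.com/de/contact", "language": "de"},
])
print(list(pairs))  # [('http://example.com/en/contact', 'http://example.com/de/contact')]
```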

For the language pairs en↔de, en↔fr, en↔es, en↔it, en↔pt, en↔nl and en↔ru, matched URL data for CommonCrawl 2015_32 is available for data extraction in release 0.1.0.