Download datasets
Downloading Lingua Libre's audio datasets allows external reuse of the recordings in native or web applications. LinguaLibre's service of periodic dump generation is currently stalled; volunteer developers are working on it (Jan. 2022). Current, past and future alternatives are documented below. Other tutorials explain how to clean up the resulting folders and how to rename the files into the more practical {language}-{word}.ogg. Be aware of the overall data size, estimated at 40 GB in WAV format.
Revision as of 10:51, 20 February 2022
| Data size — 2022/02 | |
|---|---|
| Audio files | 800,000+ |
| Average size | 100 kB |
| Total size (est.) | 80 GB |
Download datasets via click
Download by language:
- Open https://lingualibre.org/datasets/
- Find your language; the naming convention is `{qId}-{iso639-3}-{language_English_name}.zip`
- Click to download.
- On your device, unzip.
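As a quick sketch of this naming convention (the Q21/fra/French values below are illustrative assumptions, not guaranteed ids — check https://lingualibre.org/datasets/ for the real ones):

```python
def dataset_filename(q_id, iso639_3, english_name):
    """Build the expected archive name from the convention
    {qId}-{iso639-3}-{language_English_name}.zip."""
    return "{}-{}-{}.zip".format(q_id, iso639_3, english_name)

# Illustrative values only; look up the real ids on the datasets page.
print(dataset_filename("Q21", "fra", "French"))  # Q21-fra-French.zip
```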
Post-processing
Refer to the relevant tutorials in #See also to mass rename, mass convert or mass denoise your downloaded audios.
Programmatic tools
The tools below first fetch, from one or several Wikimedia Commons categories, the list of audio files they contain. Some allow filtering that list further to focus on a single speaker, either by editing their code or by post-processing the resulting .csv list of audio files. The listed targets are then downloaded at a rate of 500 to 15,000 files per hour. Items already present locally and matching the latest Commons version are generally not re-downloaded.
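As an illustration of that post-processing step, here is a minimal sketch that filters a .csv file list down to one speaker and skips files already present locally. The `title` column name and the `-{speaker}-` filename pattern are assumptions about the export format, so adapt them to your actual .csv:

```python
import csv
import os

def filter_targets(csv_path, speaker, out_dir):
    """Keep only files recorded by `speaker` that are not yet on disk."""
    targets = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            title = row["title"]  # e.g. "File:LL-Q21 (fra)-Alice-bonjour.wav"
            if "-{}-".format(speaker) not in title:
                continue  # another speaker: skip
            name = title[5:] if title.startswith("File:") else title
            if not os.path.exists(os.path.join(out_dir, name)):
                targets.append(title)  # not downloaded yet
    return targets
```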
Find your target
Categories on Wikimedia Commons are organized as follows:
- Commons:Category:Lingua Libre pronunciation by user
- Commons:Category:Lingua Libre pronunciation (by language)
Python (current)
Dependencies: Python 3.6+
Petscan and Wikiget together allow downloading about 15,000 audio files per hour.
- Select your category: see Category:Lingua Libre pronunciation and Category:Lingua Libre pronunciation by user, then find your target category.
- List target files with Petscan: given a target category on Commons, PetScan provides the list of target files. Example.
- Download target files with Wikiget: Wikiget downloads the listed files.
Comments:
- Successful in November 2021, with 730,000 audios downloaded in 20 hours. Sustained average speed: 10 downloads/sec.
- Some deleted files on Commons may cause Wikiget to return an error and pause; the script then has to be resumed manually. Occurrences have been reported at around 1 in 30,000 files. A fix is underway; support the request on GitHub.
- WikiGet therefore requires a volunteer to supervise the script while it runs.
- As of December 2021, WikiGet does not support multi-threaded downloads. To increase throughput, it is therefore recommended to run the script in 20-30 terminal windows simultaneously. Each terminal running WikiGet consumes an average of 20 kB/s.
- WikiGet requires a stable internet connection: any disruption of one second stops the download process and requires a manual restart of the script.
- Manual for PetScan
- Questions about downloading datasets can be asked on the Lingua Libre Discord server: https://discord.gg/2WECKUHj
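The speed figures reported above translate into rough wall-clock estimates; a small sketch:

```python
def estimated_hours(n_files, downloads_per_sec):
    """Rough wall-clock estimate for a sustained download rate."""
    return n_files / downloads_per_sec / 3600

# The November 2021 run reported above: 730,000 audios at 10 downloads/sec.
print(round(estimated_hours(730_000, 10), 1))  # ~20.3 hours
```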
NodeJS (soon)
Dependencies: git, nodejs, npm.
A WikiapiJS script can download a target category's files, or a root category with its subcategories and their files. It downloads about 1,400 audio files per hour.
- WikiapiJS is the NodeJS/NPM package that allows scripted API calls to Wikimedia Commons and LinguaLibre.
- Specific scripts for given tasks:
- Given a category, download all files : https://github.com/hugolpz/WikiapiJS-Eggs/blob/main/wiki-download-many.js
- Given a root category, list subcategories, download all files: https://github.com/hugolpz/WikiapiJS-Eggs/blob/main/wiki-download_by_root_category-many.js
Comments, as of December 2021:
- Successful in December 2021, with 400 audios downloaded in 16 minutes. Sustained average speed: 0.4 downloads/sec.
- Successfully processes a single category's files.
- Successfully processes a root category and its subcategories' files, generating ./isocode/ folders.
- Scalability tests for resilience with large numbers of requests (500 to 100,000 items) are still required.
- Performance improvements are under consideration on github.
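Generating ./isocode/ folders implies extracting the language code from each filename. A sketch, under the assumption that Lingua Libre filenames carry the iso code in parentheses, as in `LL-Q21 (fra)-...` (verify against your actual files before relying on it):

```python
import re

def iso_folder(filename):
    """Extract the iso639 code in parentheses, e.g. 'fra' from
    'LL-Q21 (fra)-Alice-bonjour.wav'; 'unknown' when absent."""
    m = re.search(r"\(([a-z]{2,3})\)", filename)
    return m.group(1) if m else "unknown"

print(iso_folder("LL-Q21 (fra)-Alice-bonjour.wav"))  # fra
```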
Python (slow)
Dependencies: python.
CommonsDownloadTool.py is a Python script which formerly created the datasets for LinguaLibre. It can be hacked and adapted to your needs. To download all datasets as zips:
- Download the scripts:
- create_datasets.sh - creates CommonsDownloadTool's commands.
- CommonsDownloadTool/commons_download_tool.py - core script.
- Read them a bit, then move them where they fit best on your computer so they require minimal editing.
- Edit as needed so the paths are correct, and make it work.
- Run `create_datasets.sh`
- Check that the number of files in the downloaded zips matches the number of files in Commons:Category:Lingua Libre pronunciation
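The count check in the last step can be scripted; a minimal sketch using Python's standard library (the glob pattern is an assumption about where the zips were saved):

```python
import glob
import zipfile

def count_zip_entries(pattern):
    """Count regular files (not directories) across all matching zips."""
    total = 0
    for path in sorted(glob.glob(pattern)):
        with zipfile.ZipFile(path) as z:
            total += sum(1 for name in z.namelist() if not name.endswith("/"))
    return total

# Compare against the file count shown on
# Category:Lingua Libre pronunciation, e.g.:
# count_zip_entries("./datasets/*.zip")
```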
Comments:
- Last run in February 2021; stopped due to slow speed.
- This script is slow and has been phased out as Lingualibre grew too large.
- The page may gain from some html and styling.
- Proposals go on https://phabricator.wikimedia.org/tag/lingua_libre/ or on the LinguaLibre:Chat room.
Java (not tested)
Dependencies:
sudo apt-get install default-jre # install Java environment
Usage:
- Open the GitHub Wiki-java-tools project page.
- Find the latest Imker release.
- Download the Imker_vxx.xx.xx.zip archive.
- Extract the .zip file.
- Run as follows:
  - On Windows: start the .exe file.
  - On Ubuntu, open a shell, then:
$ java -jar imker-cli.jar -o ./myFolder/ -c 'CategoryName' # downloads all media within Wikimedia Commons's category "CategoryName"
Comments:
- Not used yet by any LinguaLibre member. If you do, please share your experience of this tool.
Manual
Imker -- Wikimedia Commons batch downloading tool.
Usage: java -jar imker-cli.jar [options]
Options:
--category, -c
Use the specified Wiki category as download source.
--domain, -d
Wiki domain to fetch from
Default: commons.wikimedia.org
--file, -f
Use the specified local file as download source.
* --outfolder, -o
The output folder.
--page, -p
Use the specified Wiki page as download source.
The download source must be ONE of the following:
↳ A Wiki category (Example: --category="Denver, Colorado")
↳ A Wiki page (Example: --page="Sandboarding")
↳ A local file (Example: --file="Documents/files.txt"; One filename per line!)