Difference between revisions of "Download datasets"
This page covers downloading Lingualibre.org's media, both by hand and programmatically, as packaged zip archives with rich filenames. Tutorials then explain how to clean up the resulting folders and how to rename the files into the more practical {language}-{word}.ogg form. Be aware that Lingualibre's data can reach hundreds of GB if you download all of it.
Downloading Lingualibre's audio datasets allows external reuse of those audios in native or web applications. LinguaLibre's service for periodic generation of dumps is currently stalled; volunteer developers are needed to redevelop it. Current, past and future alternatives are documented below.
Revision as of 18:12, 30 December 2021
| Data size — 2021/02 | |
|---|---|
| Audio files | 800,000+ |
| Average size | 100 kB |
| Total size (est.) | 80 GB |
Context
Data clean up
- See also Convert files formats • Denoise files • Rename and mass rename
By default, both per-language and all-Lingualibre zip archives are provided, which therefore doubles the total data size if you download everything.
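As a sketch of the rename step pointed to above, the following assumes Lingua Libre files follow the usual "LL-Q{id} ({iso})-{speaker}-{word}.wav" naming pattern on Commons; the pattern and the example names are assumptions, and speaker names containing hyphens would need a smarter split:

```python
import re

# Assumed Lingua Libre filename pattern, e.g. "LL-Q150 (fra)-SomeSpeaker-bonjour.wav"
LL_PATTERN = re.compile(
    r"^LL-Q\d+ \((?P<lang>[^)]+)\)-(?P<speaker>[^-]+)-(?P<word>.+)\.\w+$"
)

def simplified_name(filename: str) -> str:
    """Map a Lingua Libre filename to the simpler {language}-{word}.ogg form."""
    match = LL_PATTERN.match(filename)
    if match is None:
        raise ValueError(f"unrecognized filename: {filename!r}")
    return f"{match['lang']}-{match['word']}.ogg"

print(simplified_name("LL-Q150 (fra)-SomeSpeaker-bonjour.wav"))  # fra-bonjour.ogg
```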
Find your target category
- Commons:Category:Lingua Libre pronunciation by user
- Commons:Category:Lingua Libre pronunciation by language
Tools
Python (current)
Petscan and Wikiget together allow downloading about 15,000 audio files per hour.
- Select your category: see Category:Lingua Libre pronunciation and Category:Lingua Libre pronunciation by user, then find your target category.
- List target files with Petscan: given a target category on Commons, Petscan provides the list of target files. Example.
- Download target files with Wikiget: Wikiget downloads the listed files.
Comments:
- Successful run in November 2021: 730,000 audios downloaded in 20 hours, a sustained average of 10 downloads/sec.
- Some deleted files on Commons may cause Wikiget to return an error and pause; the script then has to be resumed manually. Occurrences are reported at around 1 in 30,000 files. A fix is underway; support the request on github.
- Wikiget therefore requires a volunteer to supervise the script while it runs.
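Since Wikiget pauses on such errors, a small supervising wrapper can retry and then skip the offending file instead of stalling the whole batch. This is a hypothetical sketch around any download function, not part of Wikiget itself:

```python
from typing import Callable, Iterable

def download_all(filenames: Iterable[str],
                 download: Callable[[str], None],
                 max_retries: int = 2) -> tuple[list[str], list[str]]:
    """Download each file; retry failures, then skip, so one deleted
    file on Commons does not stall the whole batch."""
    done, skipped = [], []
    for name in filenames:
        for attempt in range(max_retries + 1):
            try:
                download(name)
                done.append(name)
                break
            except OSError:
                if attempt == max_retries:
                    skipped.append(name)  # give up on this file only
    return done, skipped

# Example with a fake downloader that fails on one (hypothetical) deleted file:
def fake_download(name: str) -> None:
    if name == "deleted.ogg":
        raise OSError("404")

done, skipped = download_all(["a.ogg", "deleted.ogg", "b.ogg"], fake_download)
print(done, skipped)  # ['a.ogg', 'b.ogg'] ['deleted.ogg']
```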
NodeJS (soon)
A WikiapiJS script can download a target category's files, or a root category together with its subcategories and their files. It downloads about 1,400 audio files per hour.
- WikiapiJS is the NodeJS / NPM package allowing scripted API calls upon Wikimedia Commons and LinguaLibre.
- Task-specific scripts:
- Given a category, download all files : https://github.com/hugolpz/WikiapiJS-Eggs/blob/main/wiki-download-many.js
- Given a root category, list subcategories, download all files: https://github.com/hugolpz/WikiapiJS-Eggs/blob/main/wiki-download_by_root_category-many.js
Dependencies: git, nodejs, npm.
Comments, as of December 2021:
- Successful run in December 2021: 400 audios downloaded in 16 minutes, a sustained average of 0.4 downloads/sec.
- Successfully processes a single category's files.
- Successfully processes a root category and its subcategories' files, generating ./isocode/ folders.
- Scalability tests for resilience with large batches (500 to 100,000 items) are still required.
- Performance improvements are under consideration on github.
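The ./isocode/ folder layout above can be derived from the subcategory names. This sketch assumes subcategories are named like "Lingua Libre pronunciation-fra"; that naming convention is an assumption to verify on Commons:

```python
def iso_folder(category: str, root: str = "Lingua Libre pronunciation") -> str:
    """Derive the ./isocode/ output folder from a subcategory name,
    assuming subcategories are named '<root>-<isocode>'."""
    prefix = root + "-"
    if not category.startswith(prefix):
        raise ValueError(f"not a subcategory of {root!r}: {category!r}")
    return f"./{category[len(prefix):]}/"

print(iso_folder("Lingua Libre pronunciation-fra"))  # ./fra/
```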
Using Imker
Dependencies:
sudo apt-get install default-jre # install Java environment
Usage:
- Open GitHub Wiki-java-tools project page.
- Find the latest Imker release.
- Download the Imker_vxx.xx.xx.zip archive.
- Extract the .zip file
- Run as follows:
  - On Windows: start the .exe file.
  - On Ubuntu, open a shell, then:
java -jar imker-cli.jar -o ./myFolder/ -c 'CategoryName' # downloads all media within the Wikimedia Commons category "CategoryName"
Manual
Imker -- Wikimedia Commons batch downloading tool.
Usage: java -jar imker-cli.jar [options]
Options:
--category, -c
Use the specified Wiki category as download source.
--domain, -d
Wiki domain to fetch from
Default: commons.wikimedia.org
--file, -f
Use the specified local file as download source.
* --outfolder, -o
The output folder.
--page, -p
Use the specified Wiki page as download source.
The download source must be ONE of the following:
↳ A Wiki category (Example: --category="Denver, Colorado")
↳ A Wiki page (Example: --page="Sandboarding")
↳ A local file (Example: --file="Documents/files.txt"; One filename per line!)
Datasets (outdated)
Former access (outdated)
- Open https://lingualibre.org/datasets/
- Download the zip, named as follows:
  - Target language: {qId}-{iso639-3}-{language_English_name}.zip
  - All languages: https://lingualibre.fr/datasets/lingualibre_full.zip
- On your device, unzip.
Go to the relevant tutorials to clean up or rename your data.
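The archive naming above can be reproduced programmatically. The Q-id, ISO code and English name in the example are illustrative values, not guaranteed to match actual archives:

```python
def dataset_zip_name(q_id: str, iso639_3: str, english_name: str) -> str:
    """Build a per-language archive name of the form
    {qId}-{iso639-3}-{language_English_name}.zip."""
    return f"{q_id}-{iso639_3}-{english_name}.zip"

print(dataset_zip_name("Q21", "fra", "French"))  # Q21-fra-French.zip
```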
Bash / Python
Refreshed: auto-run every 2 days.
The scripts: one master script (/lingua-libre/operations/create_datasets.sh) creates the commands; on LinguaLibre, audios are collected by language. lingua-libre/CommonsDownloadTool, a server-side Python script, runs them. Python and LinguaLibre knowledge is required.
Evolutions: the page may gain from some HTML and styling. Proposals go on https://phabricator.wikimedia.org/tag/lingua_libre/ or on the LinguaLibre:Chat room.
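As a sketch of what the master script does, the loop below emits one download command per language. The command name and flags are placeholders, not CommonsDownloadTool's real interface:

```python
def build_commands(languages: dict[str, str]) -> list[str]:
    """Emit one (placeholder) download command per language,
    mirroring how create_datasets.sh drives the Python tool."""
    return [
        f'python download_tool.py --category "Lingua Libre pronunciation-{iso}" '
        f'--output "{iso}-{name}.zip"'
        for iso, name in sorted(languages.items())
    ]

for cmd in build_commands({"fra": "French", "swe": "Swedish"}):
    print(cmd)
```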
Using CommonsDownloadTool
To download all datasets as zips:
- Download the scripts to a device with enough disk space.
- Read them a bit, then move them where they fit best on your computer so they require minimal editing.
- Edit them as needed so the paths are correct, and make them work.
- Run create_datasets.sh successfully.
- Check that the number of files in the downloaded zips matches the number of files in Commons:Category:Lingua Libre pronunciation.
Javascript and/or API queries
There are also ways to start from a category name, run API queries to get the list of files, then download them. For a starting point on API queries, see this pen, which gives some examples.
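For example, a category's file list can be requested from the Commons API with action=query and list=categorymembers. The sketch below only builds the request URL; actually fetching the JSON and paginating with cmcontinue is left out:

```python
from urllib.parse import urlencode

def category_files_query(category: str, limit: int = 500) -> str:
    """Build a Wikimedia Commons API URL listing the files of a category."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmtype": "file",   # only files, no subcategories
        "cmlimit": limit,
        "format": "json",
    }
    return "https://commons.wikimedia.org/w/api.php?" + urlencode(params)

print(category_files_query("Lingua Libre pronunciation-fra"))
```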
Use HTML audio elements in webpages
See Audio 101.