Help:Download datasets
This page deals with downloading Lingualibre.org's media, both by hand and programmatically, as packaged zip archives with rich filenames. We then have tutorials on how to clean up the resulting folders and how to rename the files into the more practical {language}-{word}.ogg pattern. Be aware that Lingua Libre's data can amount to hundreds of GB if you download it all.
Data size (2021/02) | |
---|---|
Audio files | 800,000+ |
Average file size | 100 kB |
Total size (est.) | 80 GB |
Safety factor | 5~10× |
Required disk space | 400~800 GB |
Context
Data clean up
- See also Convert files formats • Denoise files • Rename and mass rename
By default, we provide both per-language zip archives and an all-Lingua Libre zip archive, which therefore doubles the data size if you download it all.
Find your target category
- Commons:Category:Lingua Libre pronunciation by user
- Commons:Category:Lingua Libre pronunciation by language
Hand downloading
- Open https://lingualibre.org/datasets/
- Download your target language's zip
- On your device, unzip the archive (see the sketch below).
Go to the relevant tutorials to clean up or rename your data.
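The same steps can be scripted from a shell. A minimal sketch, assuming wget and unzip are installed; the archive name fra.zip is a placeholder, check https://lingualibre.org/datasets/ for the exact filename of your target language:

$ wget https://lingualibre.org/datasets/fra.zip   # download the per-language archive (placeholder name)
$ unzip fra.zip -d ./lingualibre-fra/             # extract into a dedicated folder
$ ls ./lingualibre-fra/ | head                    # quick look at the extracted files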
Using Imker
Requirements
On Ubuntu, run:
sudo apt-get install default-jre # install Java environment
Be aware of your target data size (see section above).
Install
- Open the GitHub Wiki-java-tools project page.
- Find the latest Imker release.
- Download the Imker_vxx.xx.xx.zip archive.
- Extract the .zip file.
- Run it as follows:
  - On Windows: start the .exe file.
  - On Ubuntu, open a shell, then:
$ java -jar imker-cli.jar -o ./myFolder/ -c 'CategoryName' # downloads all media within the Wikimedia Commons category "CategoryName"
Manual
Imker -- Wikimedia Commons batch downloading tool.

Usage: java -jar imker-cli.jar [options]

Options:
  --category, -c    Use the specified Wiki category as download source.
  --domain, -d      Wiki domain to fetch from. Default: commons.wikimedia.org
  --file, -f        Use the specified local file as download source.
* --outfolder, -o   The output folder.
  --page, -p        Use the specified Wiki page as download source.

The download source must be ONE of the following:
 ↳ A Wiki category (Example: --category="Denver, Colorado")
 ↳ A Wiki page (Example: --page="Sandboarding")
 ↳ A local file (Example: --file="Documents/files.txt"; One filename per line!)
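For example, to fetch one Lingua Libre per-language set from Commons (the category name below is an illustration; check the exact spelling under Commons:Category:Lingua Libre pronunciation by language):

$ java -jar imker-cli.jar -o ./lingualibre-fra/ -c "Lingua Libre pronunciation-fra"   # example category name, verify it on Commons first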
Using CommonsDownloadTool
To download all datasets as zips:
- Download the scripts onto a device with enough disk space.
- Read them a bit, then move them to wherever they fit best on your computer so they require the minimum of editing.
- Edit them as needed so the paths are correct and everything runs.
- Run create_datasets.sh successfully.
- Check that the number of files in the downloaded zips matches the number of files in Commons:Category:Lingua Libre pronunciation (a sketch of such a check follows this list).
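A minimal sketch of such a check, assuming the zips sit in ./datasets/ (adapt the path to your setup; the category name is the one linked above):

$ for z in ./datasets/*.zip; do unzip -l "$z" | tail -1; done | awk '{sum += $2} END {print sum " files in the zips"}'   # the last line of unzip -l reads "<bytes> <N> files"
$ curl -s "https://commons.wikimedia.org/w/api.php?action=query&prop=categoryinfo&titles=Category:Lingua_Libre_pronunciation&format=json"   # the categoryinfo reply includes a "files" count

Compare the two numbers; a small difference can simply mean new recordings were uploaded since the zips were generated.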
JavaScript and/or API queries
There are also ways to use a category name as input, run API queries to get the list of files, and then download them. For a starting point on API queries, see this pen, which gives some examples of API queries.
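The same kind of query can also be sketched from a shell with curl (the category name is an example; results come in batches of up to 500, so follow the cmcontinue token returned in the JSON to page through the rest):

$ curl -s -G "https://commons.wikimedia.org/w/api.php" \
    --data-urlencode "action=query" \
    --data-urlencode "list=categorymembers" \
    --data-urlencode "cmtitle=Category:Lingua Libre pronunciation-fra" \
    --data-urlencode "cmtype=file" \
    --data-urlencode "cmlimit=500" \
    --data-urlencode "format=json"

Each returned title can then be downloaded through https://commons.wikimedia.org/wiki/Special:FilePath/ followed by the filename (without the leading "File:"), which redirects to the actual media file.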
Use HTML audio elements in webpages
See Audio 101.